
2017 IEEE 19th International Conference on High Performance Computing and Communications; IEEE 15th International Conference on Smart City; IEEE 3rd International Conference on Data Science and Systems

Energy Efficient Scheduling of Real-Time Tasks in Cloud Environment

Sawinder Kaur, Manojit Ghose and Aryabartta Sahu
Department of Computer Science and Engineering, IIT Guwahati, India
email: {sawinder, g.manojit, asahu}@iitg.ernet.in
Abstract—Demand for cloud computing has increased tremendously in recent times due to its various benefits. Along with providing the promised services, cloud providers also need to focus on the amount of energy consumed, as reducing it has many-fold benefits. Scheduling algorithms play an important role in minimizing the energy consumption of a system. Thus, in this paper, we propose approaches for scheduling real-time tasks on a virtualized cloud system that meet all task deadlines while minimizing the overall energy consumption of the cloud system. We divide the problem of scheduling real-time tasks on a virtualized cloud system into four sub-problems and analyze and solve them separately. We provide an exact solution for scheduling one type of real-time task (sub-problem 1) and an approximate model for scheduling two types of real-time tasks with the same deadline (sub-problem 2). We then extend the approximate model to schedule many types of real-time tasks with the same deadline (sub-problem 3). Finally, we present four different approaches for scheduling general real-time tasks using a deadline clustering approach (sub-problem 4) and compare their performance in minimizing the overall energy consumption of the cloud system.

I. INTRODUCTION
The demand for solving compute-intensive workloads is increasing day by day, and many organizations do not own the required infrastructure because of its high cost, so the demand for shared computing is growing. Cloud computing is an internet-based resource-sharing platform which offers a large amount of storage and computing capability to its users. In the cloud, services are provided on demand and resources are paid for on a 'pay-as-you-go' basis. So, without buying the needed infrastructure, users can still meet their computing requirements. Reduction in financial cost is one of the main reasons for the popularity of cloud computing. Energy consumption contributes significantly to this cost: a cloud system that consumes more energy also incurs a higher cooling cost, so reducing energy consumption reduces the cooling cost as well. Thus, achieving energy efficiency reduces the overall cost of task execution in the cloud environment [1], [2]. Reducing the energy consumption of the cloud system not only provides monetary benefits to the service providers, it also helps to reduce CO2 emissions [3].

Many cloud services provide Infrastructure as a Service (IaaS) [4]. IaaS cloud systems offer virtualized resources in the form of virtual machines (VMs), which have different CPU, memory, disk and bandwidth capacities. Examples of IaaS cloud providers are Google Compute Engine (GCE) [5], Microsoft Azure [6], Cisco Metapod [7], Joyent [8], Amazon EC2 [9], etc. They provide VMs with different configurations (in terms of number of CPUs, memory, I/O bandwidth, storage) and costs.
Real-time tasks are those which have a deadline associated with them; if a task does not complete before its deadline, some penalty may need to be paid. Users' tasks are submitted to a broker, which takes all the tasks at a time as a "bag of tasks" and executes them on the cloud system. In energy efficient scheduling, the broker allocates one suitable VM to each task and executes the task on that VM, and the cloud resource scheduler in turn schedules and runs the VM on a compute host of the cloud system in such a way that the overall energy consumption of the system is minimized. In [10], Pahlevan et al. show the trade-offs between energy consumption and performance when running a wide range of applications found in private and public clouds for traditional scale-out applications, and demonstrate the benefits of near-threshold operation in increasing the energy efficiency of a server. The energy efficiency of a server is maximal at near-threshold operation, i.e., at a critical utilization (or frequency). So maintaining the load of the servers at the critical utilization increases the energy efficiency of the overall cloud system.

In this work, we consider VMs that are specified by utilization values. Bigger VMs have a higher host utilization requirement and smaller VMs have a smaller one. The purpose of this work is to propose approaches to select a suitable VM type for each user-requested real-time task and to allocate the selected VMs to hosts. The tasks are then scheduled on their respective VMs. A bigger VM (i.e., a VM with higher utilization) executes a task faster than a smaller VM, and the execution time of a task on a VM is inversely proportional to the utilization of the VM. We consider a cloud system that provides k discrete VM types, each specified by its host utilization requirement.

The main aim of this work is to schedule a set of real-time aperiodic tasks in a cloud environment. The scheduler chooses a VM for each task. The resulting VM allocation must consume the minimum amount of energy, and all the tasks must complete before their respective deadlines. Such a scheduling approach comes with the following complementary challenges:
1) Meeting deadlines - each task comes with a deadline and its execution time at maximum utilization. Using these two values, we need to find the minimum utilization at which the task must be executed so that it meets its deadline.
2) Minimizing energy consumption - whenever a physical machine is active, it consumes some amount of energy. Energy consumption depends on the time for which the host is active and the utilization of the host during its active period.
These two factors are in conflict: when a task executes at higher utilization it completes earlier but consumes more power, whereas at lower utilization the power consumption is lower but the task takes longer to complete and, in the worst case, may miss its deadline. Our objective is to find an optimal or near-optimal solution to this problem.

In this work, we formally compute the optimum value of host utilization at which the host consumes the minimum amount of energy, known as the critical utilization (described in detail in Section III-B). Depending on the specification of a task and the value of the critical utilization, we allocate a VM of suitable type to each task so that no deadline is missed and the minimum amount of energy is consumed. For scheduling the selected VMs on the available physical hosts, we divide the problem into several cases and solve each case separately.

The rest of the paper is organized as follows: Section II describes previous work. Section III describes the considered system model, the energy consumption model of the cloud system and the problem definition. Section IV classifies the hosts of cloud systems and describes scheduling approaches for systems with extreme specifications. Section V describes the scheduling approach for general systems. Section VI compares the different clustering and scheduling approaches given in Section V. We conclude and describe possible future extensions of the work in Section VII.

II. PREVIOUS WORK

A good number of surveys on energy efficient computing systems and cloud systems have been carried out. Surveys of energy- or power-consumption-aware scheduling techniques such as dynamic voltage and frequency scaling (DVFS), dynamic power management (DPM), sleep states, etc., for general computing platforms and cloud environments are given in [11] by Bambagini et al., in [12] by Orgerie et al., in [13] by Dayarathna et al. and in [1] by Kaur et al. In [12], Orgerie et al. survey techniques for improving the energy efficiency of large-scale distributed systems. In [11], Bambagini et al. survey energy-aware scheduling for real-time systems in general. In [14], a detailed survey of energy efficiency in cloud computing platforms is given.

In cloud computing systems and data centers, most of the research focuses on reducing the number of active machines (hosts) in the system, which is the key to reducing energy consumption. Consolidating the workload or VMs of under-loaded hosts onto other hosts reduces the number of active hosts and achieves energy efficiency. Another common approach is to predict the workload and, based on the prediction, switch hosts off and on accordingly to reduce power consumption. In most of these techniques, researchers use scale-up and scale-down procedures to increase and decrease the active compute resources, respectively, to cope with the workload and reduce energy consumption.
Low-level power optimization constructs like DVFS, DPM and sleep states are handled locally by each individual host of the system in these consolidation approaches.

Farahnakian et al. [15] describe energy-aware VM consolidation in cloud data centers using a linear-regression-based utilization prediction model. They use both CPU and memory utilization prediction of the current load to consolidate VMs onto active hosts while maintaining the service level agreement (SLA) for the users and keeping the load of all active hosts within an acceptable range. Ye et al. [16] developed a profiling-based approach for workload consolidation and VM migration in virtualized data centers; they consider the impact of consolidation and collocated workloads on performance. In [17], Xiao et al. used an improved exponentially weighted moving average model to predict future resource needs and, based on that, dynamically allocate resources using VMs in a cloud environment. They also consolidate the workload of under-loaded hosts onto other hosts to reduce the number of active hosts and hence the energy consumption. Hieu et al. [18] focus on reducing the total number of active hosts so as to reduce the total power consumption. In their approach, they predict the near-future demand of each host based on its local history and use the current and predicted future demands to decide whether a host is over- or under-utilized. They reduce the number of active hosts by moving all the VMs from under-utilized hosts to other hosts using a power-aware best-fit decreasing policy, and maintain the SLA by migrating some VMs of over-utilized hosts to other hosts using a minimum-migration-time policy.
Wu et al. in [4] and [19] proposed a greedy strategy to schedule scientific workflows in a cloud system under budget constraints. Initially they allocate the best possible VMs to all the tasks of the workflow and iteratively downgrade the VM types to meet the budget without violating the deadline or SLA. If the best VMs for all the tasks cannot meet the deadline constraint, or the worst VMs cannot meet the budget constraint, they do not accept the request. In [20], the authors presented an approach for scheduling online real-time aperiodic tasks in a cloud system. They use an approach called rolling horizon, where the scheduler considers both waiting tasks and currently arrived tasks in its scheduling decisions, meeting task deadlines while minimizing the energy consumption of the system.

In our case, we use the energy consumption characteristics of the hosts at different utilizations and consider the discrete VM types available for the user request. The energy consumption of a host is not uniform across its utilizations: because of the static power consumption of the host, it follows an inverted bell-shaped curve with minimum energy consumption at the critical utilization. We analyze and propose energy efficient scheduling approaches for real-time aperiodic tasks on such a system, and we try to keep the total utilization of each active host approximately equal to the critical utilization.

III. PROBLEM FORMULATION: SYSTEM MODEL AND PROBLEM DEFINITION

A. System Model

In most cloud systems, user tasks are executed on virtual machines, and the virtual machines are allocated to physical hosts. So the natural way to model the cloud system is as a three-layer system: a task layer, a virtual resource layer and a physical layer. The cloud consists of physical machines which host virtual resources to satisfy the needs of the user-requested tasks.

Fig. 1. System Model: n tasks (t1, ..., tn) are mapped to k VM types (vt1, ..., vtk), which are hosted on the m physical machines (PM1, ..., PMm) of the cloud system.

These layers can be described as follows:
• Physical Layer: The cloud system that we are considering consists of a set of m homogeneous physical machines PM = {pm1, pm2, ..., pmm}, so the specifications of all the physical machines are the same: they have the same computing power, storage and disk capabilities. Several virtual machines can be hosted on a single host, and the utilization of a host is shared among the virtual machines depending on their types. The hosts consume energy which depends on the total utilization of the host. All the hosts possess one more property, the critical utilization uc: the utilization at which energy consumption is minimum (defined formally in the next section). We need to use the minimum number of hosts, as the amount of static energy consumed is directly proportional to the number of active hosts.
• Virtual Resource Layer: This layer constitutes the virtual machines, which need to be hosted on physical machines for task execution. We have taken k types of VMs based on the value of host utilization they provide to a task. The set of VM types is denoted by VT = {vt1, vt2, ..., vtk}. The utilization provided by these VM types is not continuous, and for each VM type there is no limit on the number of VMs. These VM types are characterized by the amount of CPU utilization they provide to a task when hosted on a physical machine: when a VM with utilization u runs on top of a host, it consumes a fraction u of the compute (CPU) resource of the host. For a VM type vtj, there is a constant uj which is the amount of utilization that vtj provides to the task, with 0 < uj ≤ 1. When VMs are allocated to a host at a particular time, the sum of their utilizations must satisfy Σ_{j=1}^{l} uj ≤ 1, where l is the number of VMs allocated to that host and uj is the utilization of vmj.
For the sake of simplicity, we have considered five types of VMs (k = 5): tiny (T), small (S), medium (M), large (L) and extra large (XL), with discrete utilization values 0.2, 0.4, 0.6, 0.8 and 1, respectively. But the work can easily be extended to any value of k.
• Task Layer: The users send requests to the cloud system in the form of tasks. The set (or bag) of n tasks is denoted by T = {t1, t2, ..., tn}. The tasks in user requests are independent, and each task is an indivisible unit which needs to be executed on one VM (and one host) only. Any such task ti can be described by a 3-tuple ti = (ai, ei, di), where ai is the arrival time, ei is the execution time when run at maximum utilization (umax = 1) and di is the deadline of the task. We consider synced tasks (ai = 0 for all tasks), so in our case a task can be represented by the 2-tuple ti = (ei, di). The minimum utilization required by ti is ui = ei/di: if we execute the task at utilization ui, it finishes exactly at di. But the VMs available to us are discrete (equi-spaced on the utilization line). Let u′i be the utilization of the least feasible VM type among the available k VM types for a task ti(ei, di). Then it can be written as

u′i = (1/k) · ⌈(ei/di) · k⌉ (1)

where k is the number of VM types available in the system. This formula works when the utilizations provided by the VM types are equally distributed over the range (0, 1.0]; in our case k = 5. Equation 1 gives the VM type with the minimum utilization value which can be used for task ti; all VM types that provide utilization greater than or equal to u′i are suitable for task ti. We do not allow tasks with a utilization requirement of more than 1 in our system.
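As a concrete illustration of equation (1), the following minimal Python sketch (our own illustration, not code from the paper; names are ours) maps a task to its least feasible VM type, mirroring the five types used here.

```python
import math

VM_NAMES = {0.2: "T", 0.4: "S", 0.6: "M", 0.8: "L", 1.0: "XL"}

def least_feasible_vm(e, d, k=5):
    """Equation (1): u'_i = ceil((e/d) * k) / k, the smallest of k
    equi-spaced discrete utilizations that still meets deadline d for a
    task whose execution time at utilization 1 is e."""
    u_min = e / d                      # continuous minimum utilization u_i
    if u_min > 1:
        return None                    # infeasible: not allowed in this system
    return math.ceil(u_min * k) / k

# A task with e = 3 and d = 10 needs u_i = 0.3, so it gets an S VM (0.4).
print(VM_NAMES[least_feasible_vm(3, 10)])  # S
```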
B. Energy Model

A user task is allocated to a suitable VM type and the selected VM is then hosted on a physical machine of the cloud system. When the tasks execute, the physical machines consume energy. The energy consumption E is the total power consumed during the active period of the physical machine, which can be written as

E = ∫₀^{t_total} P(t) dt,

where P(t) is the power consumed by the host at time t and t_total is the total time for which that host is active. P(t) has two components: static power consumption and dynamic power consumption. The static power consumption (Pmin) is the minimum amount of power consumed when a host is switched on; it depends on the host's internal activities and maintenance tasks. The dynamic power consumption (Pdyn(t)) varies with the current frequency of the host machine. So the total power consumption can be written as P(t) = Pmin + Pdyn(t) and, as stated in [13], the dynamic power consumption can be formulated as Pdyn(t) ∝ f(t)³, where f(t) is the frequency of the host at time t. For single-processor systems, we may safely assume that frequency is directly proportional to the utilization of the host, so f(t) ∝ u(t), where u(t) is the utilization of the host at time t, as stated in [13]. Therefore, we can say

P(t) = Pmin + α·u(t)³, (2)

where α is a constant. A similar power model has been used by Hsu et al. [21]: they considered P = α + xβ, where x varies with the value of utilization.

Also, the time taken by a task of length e (its execution time when run at maximum utilization) to complete on a VM with utilization u can be expressed as t = e/u. Therefore, if we assume that the utilization does not vary throughout the execution of the task, the energy consumed by the host can be computed as

E = (Pmin + αu³)·(e/u) = e·(Pmin/u + αu²). (3)

Figure 2 shows the energy consumption of a task executed at different utilization values of a host with Pmin = 100 and α = 70.

Fig. 2. Energy consumption versus total utilization of the host.

The resulting plot is an inverted bell curve. The lowest point shows the minimum energy consumed by the host, and the corresponding utilization is called the critical utilization, uc.
At the critical utilization uc, dE/du = 0. From equation 3,

dE/du = e·(d/du)(Pmin/u + αu²) = e·(−Pmin/u² + 2αu) = 0

⇒ uc = ∛(Pmin/(2α)). (4)

So the value of the critical utilization is independent of the length of the task executed on the system; it depends only on the values of Pmin and α. Also, since Pmin = 2αuc³, the total energy consumption can be written as E = t(2αuc³ + αu³), that is,

E = t·α·(2uc³ + u³), (5)

where t is the execution time of the task at utilization u (equal to e/u) and uc is the critical utilization of the host.
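The critical utilization and the energy curve of equation (3) are straightforward to compute; the short Python sketch below is our own illustration, using the Pmin and α values of Figure 2.

```python
def critical_utilization(p_min, alpha):
    """Equation (4): u_c = (P_min / (2 * alpha)) ** (1/3)."""
    return (p_min / (2 * alpha)) ** (1 / 3)

def task_energy(e, u, p_min, alpha):
    """Equation (3): E = e * (P_min / u + alpha * u**2) for a task of
    length e run at constant utilization u."""
    return e * (p_min / u + alpha * u ** 2)

p_min, alpha = 100, 70                      # the setting of Figure 2
u_c = critical_utilization(p_min, alpha)    # ~0.894
# Sampling the curve reproduces the inverted bell of Figure 2, with its
# minimum near u_c.
curve = [(u / 10, task_energy(1.0, u / 10, p_min, alpha)) for u in range(1, 11)]
```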
The power model P(t) = Pmin + α·u(t)³ used in our work assumes that there is a local power optimization module (DVFS/DPM) at each host. The local optimization module at a host controls the frequency and sleep state of the compute system, which may have more than one compute component. As the power consumption depends on the utilization of the host, we can safely assume that when the host is running at the highest utilization it is running at its highest operating frequency, and that the operating frequency of the host is proportional to its utilization.
C. Problem Definition

The homogeneous cloud system at our disposal has a set of m physical machines (with m effectively unbounded), denoted by PM = {pm1, pm2, ..., pmm}. The power consumption of each host at time t is P(t) = Pmin + α·u³(t), where Pmin is the static power consumption and u(t) is the total CPU utilization of the host at time t. The energy consumption of a host executing a task is minimum at the critical utilization uc. Five types of virtual machines (VMs) are available to choose from: (a) tiny (T), (b) small (S), (c) medium (M), (d) large (L) and (e) extra large (XL). Each VM type is characterized by the amount of utilization it provides to a task; the utilization values for the T, S, M, L and XL types are 0.2, 0.4, 0.6, 0.8 and 1.0, respectively. For simplicity we consider five VM types, but the methodology developed can be extended to any k VM types. There is a one-to-one relation between VMs and tasks: only one VM can be allocated to a task, and a VM supports the execution of a single task on it.

Given a set of user tasks T = {t1, t2, ..., tn} of size n, we need to find a suitable VM type for each task and allocate all the resulting VMs to physical machines. The objective of the resulting schedule is to minimize the amount of energy consumed without missing the deadline of any task.

Since uc is the critical utilization of each physical machine, we try to achieve this value of utilization on each host. If ν is the number of VMs allocated to a host, the sum of the utilizations of these VMs must be approximately equal to uc at any particular point of time:

Σ_{j=1}^{ν} uj ≈ uc. (6)

Tasks that need utilization above uc are exceptions to this, because if executed at lower utilization they would miss their deadlines; they are scheduled individually on separate hosts using a suitable VM type. The problem then reduces to finding combinations of VMs with total utilization ≈ uc which can be allocated to the same host while minimizing the amount of energy consumed.

In our case, we try not only to minimize the number of active hosts capable of running the workload without missing any deadline, but also to run most of the hosts approximately at the critical utilization.

IV. CLASSIFICATION OF CLOUD SYSTEMS

We have analyzed the energy consumption characteristics of the system based on the value of the critical utilization uc (as calculated in equation 4). The value of uc depends on the values of Pmin and α; since we consider a homogeneous cloud system, these values are the same for all the hosts. Based on this, we categorize our problem into three types of systems, two of which fall in the category of systems with extreme static power consumption while the third refers to systems with general specifications. The three host (physical machine) categories are:
• Type 1: Hosts with negligible static power consumption; in this type of cloud, Pmin is negligible and hence uc = 0.
• Type 2: Hosts with significantly high static power consumption, where the critical utilization of the host is above 1 (uc > 1).
• Type 3: The most common type of cloud, where the host critical utilization uc lies between 0 and 1.
Scheduling approaches for type 1 and type 2 systems are described in subsections IV-B and IV-C, respectively; for type 3 systems, the scheduling approach is described in Section V.

For the case 0 < uc ≤ 1, both static and dynamic power consumption play significant roles. The value of the critical utilization lies in the range (0, 1.0], but the available VMs provide only discrete utilization values, i.e., 0.2, 0.4, 0.6, 0.8 and 1.0 for the T, S, M, L and XL types, respectively. So when more than one task runs in parallel on a host, the total utilization may not be exactly equal to uc. Hence, we calculate a value of utilization ut by which uc may be exceeded when the exact value of uc cannot be reached.

As seen in Section III, for a single physical machine executing only one task, energy consumption is minimum at the critical utilization uc. If more than one task is scheduled on a machine, they should execute for approximately the same time, so that the total utilization of the host remains the same throughout the time for which it is active (a basic assumption while computing the value of uc); otherwise CPU cycles are wasted.

So, while scheduling many tasks to a host, we assume the following: (a) the tasks should have approximately the same execution time, and (b) the utilization of the active hosts should be approximately equal to the critical utilization, to minimize the total energy consumption. But the overall energy consumption of the system also depends on the number of active hosts, and reducing the number of active hosts may help in decreasing it. So, instead of switching on a new host, we may prefer an already active host for scheduling the available tasks. This may result in increasing the utilization of that host by a small amount, but the total energy consumption can be lower than the energy consumption with a larger number of active physical hosts.

Let ut be the value of utilization which serves as an upper limit on the amount by which the utilization of a host can exceed uc. Then uc + ut can be referred to as the hot threshold of a host: the value of utilization above which the host becomes over-utilized and we get no benefit, in terms of energy reduction, from scheduling more tasks on it.

Fig. 3. Options for scheduling the new task: run it on a newly switched-on host (Host2) or pack it onto the already active Host1.

A. Calculating the value of ut for hosts

Suppose one host is running at utilization uc and a new task needs to be scheduled with utilization requirement ut. To choose whether to switch on a new host and schedule the task on it, or to schedule the task on the already active host, we need to compare the energy consumption in both cases and choose the one with the least amount of energy consumption. Figure 3 depicts the two choices for scheduling the new task.

Let E1 and Enew be the energy consumption of the already active PM and of the new PM switched on for the incoming task, when the task is scheduled on the new PM (left side of Figure 3). Let E1′ be the energy consumption of the already active host when the incoming task is scheduled on it instead of a new host (right side of Figure 3). Also, ut ∈ (0, 1.0]. Let t be the execution time of the tasks. Then

E1 = t(Pmin + αuc³), Enew = t(Pmin + αut³), E1′ = t(Pmin + α(uc + ut)³).

The incoming task will be scheduled on the already active host if that is beneficial in terms of energy consumption compared to switching on a new host (even if it pushes the total utilization of the host above uc), that is, if E1′ < E1 + Enew. So

t(Pmin + α(uc + ut)³) < t(Pmin + αuc³ + Pmin + αut³)
⇒ 3α·uc·ut·(uc + ut) < Pmin
⇒ uc·ut² + uc²·ut < (2/3)·uc³ [Pmin = 2αuc³, Eqn 4]

This is a quadratic inequality in ut. Thus ut < ((√33 − 3)/6)·uc, which can be simplified to

ut < 0.4574·uc. (7)

Fig. 4. Hot threshold (uc + ut) versus uc.

Figure 4 shows the variation of the hot threshold (uc + ut) with respect to uc. Since ut is the amount by which we may exceed uc, while allocating VMs we should target uc + ut as the total utilization. Whenever uc + ut > 1, we round its value off to 1.
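The packing decision implied by equation (7) reduces to a one-line test; the sketch below is our own hedged illustration (function and variable names are ours, not the paper's).

```python
HOT_FACTOR = 0.4574  # equation (7): u_t < ((sqrt(33) - 3) / 6) * u_c

def prefer_active_host(u_host, u_task, u_c):
    """True if a task of utilization u_task should be packed onto an active
    host currently at utilization u_host instead of waking a new host:
    packing pays off while the host stays below the hot threshold
    u_c + u_t, which is capped at total utilization 1."""
    hot_threshold = min(u_c * (1 + HOT_FACTOR), 1.0)
    return u_host + u_task <= hot_threshold
```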
B. Hosts with negligible static power consumption (uc = 0)

Figure 5(a) shows the variation of the energy consumption of a host with respect to its total utilization when the host has negligible static power consumption; the energy consumption then depends only on the tasks executing on the host.

Fig. 5. Energy consumption versus utilization for the extreme cases: (a) negligible Pmin (uc = 0); (b) uc > 1.

When Pmin is negligible compared to αu³, we can take its value to be 0 for all the hosts. As a result, we get uc = 0 from equation 4. To keep the total utilization values as close as possible to uc, that is 0 in this case, we should allocate a suitable VM type with the least utilization value to each task and schedule the tasks on separate hosts. A suitable VM type for task ti is one which completes the task before its deadline when the task is allocated to it, that is, a VM type with utilization greater than ei/di. The total energy spent then depends only on the squares of the utilizations of the hosts. Beloglazov et al. [22] considered only the current utilization as the deciding factor for scheduling user requests, but such systems fall in the category of systems with extreme specifications, which can be solved trivially.

Algorithm 1 Scheduler for a system with uc = 0
Require: A set of real-time tasks to schedule
Ensure: All the tasks meet their deadlines and minimum energy is consumed
1: for each task ti in the task set T = {t1, t2, ..., tn} do
2:   Take the VM type with utilization just above or equal to ui = ei/di
3:   Host the resulting VM on a new host and execute the task on its selected VM
4: end for

As shown in Algorithm 1, we choose for each task in the task set T = {t1, t2, ..., tn} the smallest feasible VM type which can complete the task before its deadline, allocate all the selected VMs to separate physical machines, and execute the tasks on their respective VMs. This schedule is based on the fact that the value of uc is 0 for each physical machine and we want the total utilization to be as close as possible to uc; thus, using the least feasible VM gives the schedule with the least energy consumption. Also, the energy consumption of a host is directly proportional to the square of its total utilization in this case (equation 3 with Pmin = 0), and for a set of n positive utilizations {u1, u2, ..., un} the sum of squares is at most the square of the sum:

u1² + u2² + ... + un² ≤ (u1 + u2 + ... + un)².

So we should schedule the selected VMs on separate hosts. Let ui be the utilization of the VM chosen for task ti; then the energy consumed in executing all the tasks is

E = α · Σ_{i=1}^{n} (ei · ui²), (8)

where ei is the execution time of task ti when run at maximum utilization (given as input with each task).
C. Hosts with significantly high static power consumption (uc > 1)

Figure 5(b) shows the energy-consumption-versus-utilization curve of a host whose critical utilization is more than one. For any system, the total value of utilization can never exceed one, so we should take uc = 1 and schedule the tasks according to the general case discussed in Section V. The reason for choosing uc = 1 is that the energy consumption strictly decreases as utilization increases until u reaches uc; since the minimum energy consumption among all feasible utilization values is obtained at utilization 1, choosing uc = 1 gives the best possible result.
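Both extreme host types thus admit a trivial dispatcher. The following sketch is our own summary of the two cases (it reuses the earlier least_feasible_vm helper, and the return values are illustrative), including the equation-(8) energy bookkeeping for uc = 0.

```python
def schedule_extreme(tasks, u_c, k=5):
    """tasks: list of (e, d). For u_c == 0, run each task on its own host on
    its least feasible VM (Algorithm 1); for u_c > 1, clamp u_c to 1 and
    fall through to the general scheduler of Section V (not shown here)."""
    if u_c > 1:
        return "use the general case of Section V with u_c = 1"
    assert u_c == 0, "general systems (0 < u_c <= 1) use Section V"
    placements, energy_over_alpha = [], 0.0
    for e, d in tasks:
        u = least_feasible_vm(e, d, k)   # one VM on one fresh host
        placements.append((e, d, u))
        energy_over_alpha += e * u ** 2  # equation (8), divided by alpha
    return placements, energy_over_alpha
```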
V. SCHEDULING METHODOLOGY FOR SYSTEMS WITH GENERAL SPECIFICATIONS (0 < uc ≤ 1)

Based on the type of tasks in the request, the energy efficient scheduling of real-time tasks can be done by dividing them into 4 cases (e and d refer to the execution time of a task at maximum utilization and to the deadline of the task, respectively):
1) Case 1: All the n tasks are of the same type, i.e., ti(ei = e, di = d).
2) Case 2: Two types of tasks with different execution times but the same deadline, i.e., ti(ei ∈ {e1, e2}, di = d).
3) Case 3: Tasks with different execution times but the same deadline, i.e., ti(ei, di = d).
4) Case 4: All n tasks have their own ei and di, i.e., ti(ei, di).

The reason behind choosing this set of cases is that the scheduling approach of every case depends on the scheduling approach for the previous one (except for the first case, which depends on the scheduling approach for the base case). This will become clear from the detailed discussion of the scheduling approaches for these cases in the following subsections.

A. Scheduling n tasks of the same type (Case 1: (e, d))

This case refers to requests with n tasks having the same specification ti(e, d). An iterative approach is followed to solve this problem. We start with the suitable VM type with the least utilization value that completes the task within the deadline, and check whether the VM type with the next higher utilization performs better. If uc is the critical utilization of the hosts and the tasks require a VM with utilization u, the number of VMs that can be scheduled per host is β = ⌈uc/u⌉ or γ = ⌊uc/u⌋. For example, when uc = 0.7 and u = 0.2, the values of β and γ are 4 and 3, respectively. Suppose β is chosen out of these two; then we have the following two cases for the number of tasks getting scheduled:
1) If n is a perfect multiple of β, then all the tasks are scheduled according to our method and the number of hosts required to execute all the tasks is m = n/β, where n is the number of tasks in the request.
2) If n is not a multiple of β, then m = ⌈n/β⌉ − 1 hosts execute mβ tasks and the remaining tasks (n − mβ) are scheduled according to the base case. The remaining number of tasks is guaranteed to be less than 2β.

For scheduling the tasks in the base case, we use a first-fit method: we sort the tasks in decreasing order of their utilization requirements and allocate the VMs to hosts using the first-fit approach for bin packing, taking the bin capacities as uc + ut instead of 1. Moreover, in this case the tasks have the same specification, so we do not actually need to sort them. Also, since the maximum value of β is 5 (when ui = 0.2), the maximum number of tasks that can fall into the base case is 10.
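The two packing densities β and γ and the resulting host counts are simple to compute; the helper sketch below (our own names, reused by the later listings) shows them.

```python
import math

def vms_per_host(u_c, u):
    """Packing densities of Algorithm 2, steps 2-3: beta lands just above
    u_c in total utilization, gamma just below it."""
    return math.ceil(u_c / u), math.floor(u_c / u)   # (beta, gamma)

def hosts_needed(n, per_host):
    """Number of fully packed hosts; the leftover tasks go to the base case."""
    full = n // per_host
    return full, n - full * per_host

# Example from the text: u_c = 0.7, u = 0.2 -> beta = 4, gamma = 3.
print(vms_per_host(0.7, 0.2))  # (4, 3)
```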
Energy consumption in this case is given by

E = m · (e/u) · α · (2uc³ + (uβ)³) + Ebase, (9)

where m is the number of active hosts, e/u is the execution time of a task (i.e., the time for which the hosts are active), α(2uc³ + (uβ)³) is the power consumption of each host, and Ebase is the energy consumed in scheduling the tasks that fall into the base case.
We have a set of suitable VM types for a task, each of which can complete the task before its deadline. For choosing the best VM type among them, we have derived relations through which the decision can be made based on the value of uc of the hosts.
1) Relation between β and γ for the same VM type: Let u be the utilization provided by the current VM type, and let n1 and n2 be the total numbers of hosts needed with β and γ VMs per host, respectively. The schedule with γ VMs per host performs better than the schedule with β VMs per host when its total energy consumption is lower (this can be derived from equation 9). γ VMs per host is preferable if

n2 · (e/u) · α · (2uc³ + (uγ)³) < n1 · (e/u) · α · (2uc³ + (uβ)³)
⇒ 2n2·uc³ − 2n1·uc³ < n1(uβ)³ − n2(uγ)³
⇒ uc < u · ∛((n1β³ − n2γ³) / (2(n2 − n1))). (10)

For example, when the least feasible VM type is S and uc ∈ (0.40, 0.49], the resulting values of β and γ are 2 and 1, respectively, and the schedule with one VM per host consumes less energy than the schedule with two VMs per host.
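Relations (10) and (11) translate directly into two predicates; the sketch below is our own illustration of these tests (the function names are ours).

```python
def gamma_preferable(u, u_c, beta, gamma, n1, n2):
    """Relation (10): gamma VMs per host beat beta VMs per host when
    u_c < u * ((n1*beta^3 - n2*gamma^3) / (2*(n2 - n1)))^(1/3)."""
    rhs = u * ((n1 * beta**3 - n2 * gamma**3) / (2.0 * (n2 - n1))) ** (1 / 3)
    return u_c < rhs

def higher_vm_preferable(u1, u2, b1, b2, m1, m2, u_c):
    """Relation (11): with u1 < u2, the larger VM type wins when
    2*u_c^3*(u1*m2 - u2*m1) < m1*u2*(b1*u1)^3 - m2*u1*(b2*u2)^3."""
    lhs = 2 * u_c**3 * (u1 * m2 - u2 * m1)
    return lhs < m1 * u2 * (b1 * u1) ** 3 - m2 * u1 * (b2 * u2) ** 3
```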
2) Relation between two different VM types: Let u1 and u2 (u1 < u2) be the utilizations provided by the two VM types to be compared, let β1 and β2 be the corresponding numbers of tasks per host resulting from the above relation, and let m1 and m2 be the corresponding numbers of hosts. Choosing the VM with the higher utilization gives lower energy consumption when

m1 · (e/u1) · α · (2uc³ + (β1u1)³) > m2 · (e/u2) · α · (2uc³ + (β2u2)³)
⇒ m1·u2·(2uc³ + (β1u1)³) > m2·u1·(2uc³ + (β2u2)³)
⇒ 2uc³·(u1m2 − u2m1) < m1u2(β1u1)³ − m2u1(β2u2)³. (11)

For example, when the least feasible VM type is S and uc ∈ (0.50, 0.69], the schedule using L-type VMs consumes less energy than the schedule using S-type VMs.

Algorithm 2 Scheduling a single type of tasks S(e, d, n)
1: Allocate the VM with the minimum utilization required by the task from T, S, M, L and XL. Let it be u.
2: Number of tasks per host when the total utilization is above uc: β ← ⌈uc/u⌉
3: Number of tasks per host when the total utilization is below uc: γ ← ⌊uc/u⌋
4: Find the number of hosts in each case; let them be n1 and n2, respectively
5: if uc < u · ∛((n1β³ − n2γ³)/(2(n2 − n1))) then
6:   β1 ← γ ▷ γ VMs per host consume less energy
7:   Number of hosts m1 ← n2
8: else
9:   β1 ← β ▷ β VMs per host consume less energy
10:  Number of hosts m1 ← n1
11: u1 ← u
12: Similarly choose β2 and m2 for u2 = u1 + 0.2, if u1 ≤ 0.8
13: if 2uc³(u1m2 − u2m1) < m1u2(β1u1)³ − m2u1(β2u2)³ then
14:   uf ← u2, βf ← β2 and mf ← m2 ▷ the VM with higher utilization consumes less energy
15:   u ← u2; go to step 2
16: else
17:   uf ← u1, βf ← β1 and mf ← m1 ▷ the VM with lower utilization consumes less energy
18: Allocate VMs with utilization uf to the tasks.
19: Schedule βf VMs on each host.
20: Schedule the remaining n − mf·βf tasks according to the base case.

The pseudo-code for scheduling a single type of tasks is shown in Algorithm 2. We start by finding the minimum utilization required by the tasks and select the VM type with the least utilization among the set of suitable VM types; we call the utilization of this VM type u1 (it can also be calculated as u = ⌈(e/d)·5⌉/5). Then we check the relation given in equation 10 to choose the number of tasks per host: if it is satisfied, the schedule with γ VMs per host performs better than the one with β VMs per host. The chosen number of VMs per host is set as β1 and the corresponding number of hosts is assigned to m1. Similarly, for the next VM type with utilization u2, we find β2 and m2. We then check which of the two VM types gives better results using the relation in equation 11: if it is satisfied, the higher VM type with utilization u2 gives a better result than the one with utilization u1 and we continue with u2 as u; otherwise u1 performs better than u2. We store the final values as uf, βf and mf, where uf is the utilization provided by the selected VM type, βf is the number of VMs to be scheduled on one host and mf is the total number of hosts required (not counting the tasks that fall into the base case). Finally, we allocate the tasks to VMs with utilization uf, schedule βf VMs per host, and schedule the remaining tasks according to the base case.
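Putting the pieces together, a compact driver for Algorithm 2 might look as follows. This is our own sketch, not the authors' implementation; it assumes the vms_per_host, hosts_needed and predicate helpers from the previous listings are in scope.

```python
import math

def schedule_single_type(e, d, n, u_c, k=5):
    """Iterative VM-type selection of Algorithm 2 for n identical (e, d)
    tasks. Returns (u_f, beta_f, m_f); the n - m_f*beta_f leftover tasks go
    to the base case (first-fit with bin capacity u_c + u_t)."""
    u = math.ceil((e / d) * k) / k                 # least feasible VM type
    def pick(u):
        beta, gamma = vms_per_host(u_c, u)
        n1, _ = hosts_needed(n, beta)
        if gamma > 0:                              # gamma = 0 when u > u_c
            n2, _ = hosts_needed(n, gamma)
            if n2 != n1 and gamma_preferable(u, u_c, beta, gamma, n1, n2):
                return gamma, n2
        return beta, n1
    b1, m1 = pick(u)
    while u < 1.0:                                 # try the next larger type
        u2 = round(u + 1 / k, 10)
        b2, m2 = pick(u2)
        if higher_vm_preferable(u, u2, b1, b2, m1, m2, u_c):
            u, b1, m1 = u2, b2, m2                 # larger VM type is cheaper
        else:
            break
    return u, b1, m1
```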
Algorithm 2 gives the schedule with the lowest energy consumption, but there are some special cases in which the two relations in equations 10 and 11 can be simplified further. The simplifications do not change the decision made by the algorithm, so whenever the task set of a user request falls into one of these cases, the relations can safely be replaced with the resulting ones. These special cases are:

• Relation between the energy consumption with β and γ tasks per host when the total number of tasks is a multiple of both: Let n be the number of tasks in the user request, satisfying n%β = 0 and n%γ = 0, and let n1 and n2 be the numbers of hosts with β and γ VMs per host, respectively; then n1 = n/β and n2 = n/γ. Energy consumption is lower with γ VMs per host, for a VM with utilization u, if the following condition is satisfied:

(n/γ) · (e/u) · α · (2uc³ + (uγ)³) < (n/β) · (e/u) · α · (2uc³ + (uβ)³)
⇒ (2uc³ + (uγ)³)/γ < (2uc³ + (u(γ+1))³)/(γ+1) [as β = γ + 1]
⇒ (γ+1)·(2uc³ + (uγ)³) < γ·(2uc³ + (u(γ+1))³)
⇒ 2uc³ < u³·γ·(γ+1)·(2γ+1)
⇒ uc < ∛(γ·u³·(2γ² + 3γ + 1)/2). (12)

So if the above condition is satisfied, γ VMs per host are allocated for minimum energy consumption; otherwise β VMs per host consume less energy.

• Comparing the energy consumption of two different VM types when the number of tasks is a multiple of both β1 and β2: Let n be the number of tasks and let u1 and u2 (u1 < u2) be the utilizations provided by the two VM types to be compared. If β1 and β2 are the respective numbers of tasks per host and n satisfies n%β1 = 0 and n%β2 = 0, then energy consumption is lower at u2 if

(n/β2) · (e/u2) · α · (2uc³ + (u2β2)³) < (n/β1) · (e/u1) · α · (2uc³ + (u1β1)³)
⇒ u1β1·(2uc³ + (u2β2)³) < u2β2·(2uc³ + (u1β1)³)
⇒ 2uc³·(u1β1 − u2β2) < u2β2(u1β1)³ − u1β1(u2β2)³
⇒ 2uc³·(u1β1 − u2β2) < u1u2β1β2·((u1β1)² − (u2β2)²)
⇒ 2uc³·(u1β1 − u2β2) < u1u2β1β2·(u1β1 − u2β2)·(u1β1 + u2β2).

The final relation depends on the sign of (u1β1 − u2β2):

uc > ∛(u1u2β1β2(u1β1 + u2β2)/2) if (u1β1 − u2β2) < 0,
uc < ∛(u1u2β1β2(u1β1 + u2β2)/2) otherwise. (13)

• When β1 and β2 are equal in the above case: This case refers to a situation where both utilization values result in the same β value. Since u2 > u1 and β1 = β2 = β (say), the value of u1β1 − u2β2 is always less than 0. Therefore ∛(u1u2·β·β·(u1β + u2β)/2) can be written as β·∛(u1u2(u1 + u2)/2). So if the condition

uc > β · ∛(u1u2(u1 + u2)/2) (14)

is satisfied, the VM type with the higher utilization consumes less energy than the VM type with the lower utilization value.
B. Scheduling approach for two types of tasks having the same deadline (Case 2: (e1, d) and (e2, d))

This case refers to requests with only two types of tasks: each task has one of the two available execution times, but all tasks have the same deadline. Let n1 and n2 be the numbers of tasks with specifications (e1, d) and (e2, d), respectively. These tasks may run independently on separate hosts, or may run together on the same host. When the tasks run individually, the problem reduces to case 1; when they run together, the total energy consumption has two components:
1) the energy consumption of the hosts where both types of tasks reside, and
2) the energy consumption of the hosts where a single type of task resides.
The second component occurs when one type of task is much more numerous than the other and not all of the former can be combined with the latter. As discussed in the previous case, the tasks running on the same host must have approximately the same execution time, so that the total utilization of the host remains the same throughout its active period. To achieve this, we allocate the least feasible VM type to all the tasks, which approximates their execution times to the deadline d: the least feasible VM type completes a task just before its deadline, and since all the tasks share the same deadline, allocating the least feasible types makes the execution times of all the tasks approximately equal to d. Now, let u1 and u2 be the utilizations of the VM types allocated to the two task types, and let β1 and β2 be the (non-zero) numbers of tasks of each type on a host; when either of the two is zero, the tasks run individually. Based on the values of uc, u1 and u2, the pair (β1, β2) may assume several values. The decision of whether to run the tasks individually or in one of the combinations must then be made; for this, we compare the total energy consumption in each case and choose the one with the least value.

Algorithm 3 Scheduling two types of tasks with the same deadline D(e1, e2, d, n1, n2)
1: Let u1 and u2 be the utilizations of the VM types that satisfy the minimum requirements of the tasks with specifications (e1, d) and (e2, d), respectively
2: Compute the pairs (β1, β2) such that the total utilization is approximately equal to uc and not more than uc + ut
3: for each combination c of (β1, β2) do
4:   if ⌈n1/β1⌉ > ⌈n2/β2⌉ then
5:     swap the two types of tasks
6:   number of hosts with both types of tasks: numHostsc ← ⌈n1/β1⌉
7:   remaining tasks of the second type: NRc ← n2 − ⌈n1/β1⌉·β2
8:   Ec ← ⌈n1/β1⌉·d·(2uc³ + (u1β1 + u2β2)³) + Epart + S(e2, d, NRc)
9: end for
10: Esingle ← S(e1, d, n1) + S(e2, d, n2) + Epart
11: Compare the energy consumption in all the cases and choose the least one.

The pseudo-code for scheduling two types of tasks with the same deadline is given in Algorithm 3. We first find the least feasible VM types that can be allocated to the two task types; let u1 and u2 be their utilizations. To obtain a total host utilization approximately equal to uc (preferably slightly higher), these VM types can be scheduled together; let β1 and β2 be the corresponding numbers of VMs on one host when they are scheduled together. There is also the option of scheduling both task types individually (S(e1, d, n1) and S(e2, d, n2), scheduled as described in Algorithm 2). So, for all the available options, we compute the energy consumption and choose the best one. Since we compare all the cases exhaustively and choose the best, we always obtain the result with the least energy consumption.

C. Scheduling approach for requests with multiple task types having the same deadline (Case 3: (ei, d))

This case refers to requests where the tasks may take different execution times at utilization 1 but have the same deadline. For each task, we initially allocate the least feasible VM type, as in the previous case. The tasks are thereby classified into 5 categories based on the VM type allocated to them: the categories contain the tasks that need T, S, M, L and XL VM types, respectively.

One important observation for solving this problem is that there cannot be VMs from more than two categories on the same host, since the number of discrete VM types is 5 and all VM types are equi-spaced in terms of utilization. This can be seen from the following example: even with uc = 1.0, the highest possible value of uc, no more than two types of VMs can be combined, since the sum of the utilizations of the three least powerful VM types T, S and M is already 0.2 + 0.4 + 0.6 = 1.2 > 1, the maximum possible utilization of a host. There are at most five possible two-type combinations for any value of uc: (a) one VM of type T with one VM of type L, (b) two VMs of type T with one VM of type M, (c) one VM of type S with one VM of type M, (d) three VMs of type T with one VM of type S, and (e) one VM of type T with two VMs of type S. Even when executed separately, the total utilization never reaches 1.0 except for u = 0.2. As we can see, even with the highest value of uc only five combinations are possible, so the number of combinations is even smaller for smaller utilization values. We can therefore exhaustively check each of the possible cases and choose the best option. For each of the possible combinations, we use the approach used for solving case 2, since both problems are the same, i.e., two types of tasks having the same deadline.
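The exhaustive pairing step is cheap precisely because so few mixes exist; the sketch below (our own illustration) enumerates the candidate (β1, β2) mixes of two VM types under a utilization cap.

```python
import math

def feasible_mixes(u1, u2, u_cap):
    """All (beta1, beta2) mixes of two VM types with beta1*u1 + beta2*u2
    <= u_cap (u_cap ~ u_c + u_t, capped at 1). These are the candidate
    combinations that Algorithm 3 compares by energy."""
    mixes = []
    for b1 in range(1, math.floor(u_cap / u1) + 1):
        for b2 in range(1, math.floor((u_cap - b1 * u1) / u2) + 1):
            mixes.append((b1, b2))
    return mixes

# With u_cap = 1.0, an L-type VM (0.8) pairs with a single T-type VM (0.2):
print(feasible_mixes(0.8, 0.2, 1.0))  # [(1, 1)]
```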
The scheduling approach for case 3 is given in Figure 6. We start with the five categories of tasks, which have been allocated to VMs of types T, S, M, L and XL, respectively.

Fig. 6. Scheduling approach for case 3, SC3(ei, d, n): the flow chart allocates the least feasible VM type to each task, schedules the XL VMs individually, and then branches on uc < 0.8, uc < 0.6 and uc < 0.4 to pick the cheapest of the candidate pairing schedules (L with T, M with S, M with T, S with T).
Since the XL VMs cannot be scheduled together with any other VMs, the tasks of that category are scheduled individually on separate hosts (shown in Figure 6 as "Schedule XL VMs"). The combination of VM types depends on the value of the critical utilization, so, depending on the value of uc, there are several possible flows, and we choose the one which consumes the least amount of energy. The tasks with VM types L and XL are given preference in the scheduling order because they can only be combined with tasks allocated to T and S type VMs: if we scheduled the latter first, no tasks would be left to combine with the tasks having higher utilization requirements.

As shown in the flow chart, we consider the following four conditions and decide the scheduling of the VMs based on them:
• Case (0.8 ≤ uc < 1): We select one of four schedules: (a) L-type VMs combined with T-type VMs, followed by M with S, followed by S with T; (b) L with T, followed by M with T, followed by S with T; (c) M with S, followed by L with T, followed by S with T; and (d) M with T, followed by L with T, followed by S with T. The combination schedules are specified in the flow chart (Figure 6); among them, the one with the least total energy consumption (ties broken arbitrarily) is selected. While proceeding within one schedule, only the tasks left over in one step go on to the next; we also aim to schedule all the tasks with the higher VM type in one step, with any remaining lower-type tasks going to the next steps.
• Case (0.6 ≤ uc < 0.8): In this case, we schedule the L-type VMs individually on separate hosts and further check whether uc ≤ 0.6. For uc > 0.6, we have two possible schedules: (a) M with S, followed by S with T; and (b) M with T, followed by S with T. We compare the total energy consumption in both cases and choose the schedule with the least energy consumption.
• Case (0.4 ≤ uc < 0.6): In this case, the M-type VMs are scheduled individually on separate hosts, and running the two remaining types of VMs in parallel is checked against the schedule for case 2 (using D(e1, e2, d, n1, n2) as described in Algorithm 3) and scheduled accordingly.
• Case (uc < 0.4): In this case, we schedule the T and S type VMs on separate hosts, which minimizes the energy consumption of the hosts of the cloud system.

D. Scheduling approach for general synced real-time tasks (Case 4: (ei, di))

This is the general case, with no restriction on the execution times and deadlines of the tasks. To solve it, we want to divide the tasks into several task sets having the same deadline, so that each set can be scheduled separately according to the schedule for case 3 (SC3(ei, d, n), as given in the flow chart of Figure 6). For this, we first sort the task set in increasing order of deadlines and then select the VM with the least utilization value from the set of suitable VM types. To divide the task set into clusters, we apply one of the methodologies described below, such that the tasks belonging to the same cluster map to the same deadline value, which may not be the actual deadline of the tasks. Each cluster is then executed according to case 3 (SC3(ei, d, n)), where the tasks may have different execution times but the same deadline. We have designed and used four methodologies for clustering the tasks according to their deadlines (a sketch of the basic clustering loop follows the list):
1) No change in the utilization requirements of the tasks (CLT1): Figure 7(a) describes this clustering technique. The horizontal axis represents time, and tasks are plotted at their respective deadlines; vertical lines on the time axis (x-axis) represent the task deadlines. We initialize the first cluster with the first task and let the deadline of the cluster be the deadline of this task. We keep adding tasks to the cluster until a task would need a higher VM type than its initially allocated one (the actual deadline of an added task is changed to the deadline of the cluster by shifting the task's deadline towards the left on the time axis). We start a new cluster whenever such a task is found, and repeat the process until all the tasks are mapped to some cluster. When the deadline of a task is shifted towards the left of the time axis, the absolute deadline of the task decreases, so its utilization requirement increases. For example, suppose a task requires utilization 0.12 (a T-type VM); its deadline can be shifted left on the time axis as long as the utilization requirement of the task remains at most 0.2 (the same T-type VM).
2) The utilization requirement of a task may change by at most one level (CLT2): The process is the same as above, with the small change that a task may be allocated the next higher VM type if required; that is, up-grading the VM type is allowed, but only by one step. Figure 7(b) describes this clustering technique.
3) The utilization requirement may change up to ui = 1 and a task is allocated to the nearest cluster (CLT3): In this technique, VM up-grading up to the largest type is allowed; the remaining process is the same as in the above two clustering techniques. Figure 7(c) describes this clustering technique. The marked task is eligible for both clusters C1 (marked with a dotted line) and C2, but this clustering technique chooses C2 for the task, as it is the cluster with the closest deadline.
4) The utilization requirement may change up to ui = 1 and a task is allocated to the cluster with the lowest ID (CLT4): Initialize the first cluster with the first task and let the deadline of the cluster be the deadline of this task. Iterate over all the tasks and check whether they can be allocated to the current cluster; start a new cluster from the first task which could not be allocated to the previous cluster. Repeat, considering only the uncovered tasks in each iteration, until all the tasks are covered by some cluster. Proceeding this way, a task cannot go into a cluster with a higher ID even if it satisfies the requirements. Figure 7(d) describes this clustering technique. The marked task is eligible for both clusters C1 and C2 (marked with a dotted line), but this clustering technique chooses C1 for the task, as it is the cluster with the lowest ID.
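As referenced above, here is a minimal sketch of the greedy clustering loop. It is our own code, not the paper's: a max_levels parameter of 0, 1 or k−1 approximates CLT1, CLT2 and the CLT3/CLT4 family, respectively.

```python
import math

def cluster_by_deadline(tasks, k=5, max_levels=0):
    """tasks: (e, d) pairs sorted by increasing deadline. A cluster keeps the
    deadline of its first task; a later task joins if shifting its deadline
    left to the cluster deadline raises its VM level by at most max_levels
    (and never above level k, i.e., utilization 1)."""
    clusters, current, cluster_d = [], [], None
    for e, d in tasks:
        own_level = math.ceil((e / d) * k)              # least feasible level at d
        if cluster_d is not None:
            new_level = math.ceil((e / cluster_d) * k)  # level at the cluster deadline
            if new_level <= min(own_level + max_levels, k):
                current.append((e, d))
                continue
            clusters.append((cluster_d, current))       # close the current cluster
        current, cluster_d = [(e, d)], d                # open a new cluster
    if current:
        clusters.append((cluster_d, current))
    return clusters
```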
VI. PERFORMANCE EVALUATION

In the previous section, we described four clustering techniques for dividing the task set into clusters with the same deadline. In this section, we compare the total energy consumed by the task set when clustered according to these techniques. The task set is generated by randomly choosing each task's execution time at utilization 1 and its deadline; the tasks are then sorted according to their deadlines. This sorted task set is given as input to each of the clustering techniques, and the resulting clusters are scheduled according to case 3 (SC3(ei, d, n), as given in the flow chart of Figure 6).

Since we use a homogeneous cloud system, all the hosts have the same specifications. The hosts are characterized by the value of the critical utilization uc, so the cloud system itself can also be characterized by this value. We have performed our experiments with the different variations stated below.
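The paper does not give the exact distributions, so the generator sketch below is an assumption-laden illustration of the described setup (uniform draws, and the parameter names and ranges are ours).

```python
import random

def generate_task_set(n, e_max=100.0, slack_max=4.0, seed=None):
    """Random bag of n synced tasks: an execution time e at utilization 1
    and a deadline d >= e (so every task is feasible), sorted by deadline
    as the clustering techniques expect."""
    rng = random.Random(seed)
    tasks = []
    for _ in range(n):
        e = rng.uniform(1.0, e_max)
        d = e * rng.uniform(1.0, slack_max)
        tasks.append((e, d))
    return sorted(tasks, key=lambda t: t[1])
```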
Fig. 7. Clustering description for the CLTs: (a) CLT1, (b) CLT2, (c) CLT3 (the task goes to the nearest cluster), (d) CLT4 (the task goes to the lowest-ID cluster).

Fig. 8. Energy consumption of the system: (a) for different task mixes, (b) for different numbers of tasks, (c) for different values of uc.
1) Keeping the number of tasks fixed, we computed the energy consumption for different sets of tasks. Figure 8(a) shows a comparison of the energy consumption values of all four clustering techniques on the same cloud system; we have taken uc = 0.73 in this case. As we can see, clustering technique CLT4 gives the least energy consumption for all the task mixes.
2) Figure 8(b) shows a comparison of the energy consumption values of all four clustering techniques when the requests have different numbers of tasks. We have normalized the energy consumption of all the clustering techniques with respect to CLT1 for the same task set. This comparison is done for the same cloud system, with uc = 0.73. As we can see, a similar pattern is followed for all the task sets, and CLT4 gives the least energy consumption in all the cases.
3) Keeping the task set the same, Figure 8(c) shows a comparison of the energy consumption values of all four clustering techniques on different cloud systems. Here also, we compare the energy consumption values normalized with respect to CLT1 on the same cloud system. Cloud systems with a lower critical utilization value give better results with clustering techniques CLT1, CLT2 and CLT3, whereas cloud systems with higher critical utilization values give better results with CLT4.
From the above results, we can say that the choice of clustering technique does not depend on the user request but on the specification of the cloud system. Since the value of uc is known for a cloud system, we can choose the clustering technique without looking at the task mix. For cloud systems with lower critical utilization values, that is uc ≤ 0.6, any one of CLT1, CLT2 or CLT3 may be chosen, as they all produce comparable results. For systems with higher uc, it is better to choose CLT4 as the clustering technique to minimize the energy consumption of the system.

VII. CONCLUSION AND FUTURE WORK

In this work, we investigated the problem of energy efficient scheduling of a bag of real-time tasks in a cloud environment, where each task can be allocated to one of a set of discrete VM types provided by the cloud system. We carefully used the energy-consumption-versus-utilization characteristics of the hosts of the cloud system to minimize the overall energy consumption of the system. We divided the problem of energy efficient scheduling of a set of real-time tasks in the cloud into four smaller sub-problems and then solved them individually. All the scheduling policies were driven by the idea that the energy consumption of a host is minimum at its critical utilization. Extending this work to cloud systems with heterogeneous hosts and to online real-time tasks will be an excellent research direction; considering systems with preemption and migration can also be a future extension of this work.

REFERENCES
[1] Tarandeep Kaur and Inderveer Chana. Energy Efficiency Techniques in Cloud Computing: A Survey and Taxonomy. ACM Comput. Surv., 48(2):22:1–22:46, October 2015.
[2] R. Buyya et al. Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6):599–616, 2009.
[3] Z. Li et al. Cost and energy aware scheduling algorithm for scientific workflows with deadline constraint in clouds. IEEE Trans. on Services Computing, PP(99):1–1, 2017.
[4] C. Q. Wu, X. Lin, D. Yu, W. Xu, and L. Li. End-to-End Delay Minimization for Scientific Workflows in Clouds under Budget Constraint. IEEE Trans. on Cloud Computing, 3(2):169–181, April 2015.
[5] https://cloud.google.com/compute/.
[6] https://azure.microsoft.com/en-in/.
[7] http://www.cisco.com/c/en/us/products/cloud-systems-management/metacloud/index.html.
[8] https://www.joyent.com/.
[9] https://aws.amazon.com/ec2/.
[10] A. Pahlevan et al. Towards near-threshold server processors. In DATE, pages 7–12, March 2016.
[11] M. Bambagini et al. Energy-Aware Scheduling for Real-Time Systems: A Survey. ACM Trans. Embed. Comput. Syst., 15(1):7:1–7:34, January 2016.
[12] Anne-Cecile Orgerie et al. A Survey on Techniques for Improving the Energy Efficiency of Large-scale Distributed Systems. ACM Comput. Surv., 46(4):47:1–47:31, March 2014.
[13] M. Dayarathna, Y. Wen, and R. Fan. Data Center Energy Consumption Modeling: A Survey. IEEE Commun. Surv. Tutorials, 18(1):732–794, 2016.
[14] T. Mastelic et al. Cloud Computing: Survey on Energy Efficiency. ACM Comput. Surv., 47(2):33:1–33:36, December 2014.
[15] F. Farahnakian, T. Pahikkala, P. Liljeberg, et al. Energy-aware VM Consolidation in Cloud Data Centers Using Utilization Prediction Model. IEEE Trans. on Cloud Computing, PP(99):1–1, 2016.
[16] K. Ye et al. Profiling-Based Workload Consolidation and Migration in Virtualized Data Centers. IEEE Trans. on Parallel and Distributed Systems, 26(3):878–890, 2015.
[17] Z. Xiao, W. Song, and Q. Chen. Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment. IEEE Trans. on Parallel and Distributed Systems, 24(6):1107–1117, June 2013.
[18] N. T. Hieu, M. Di Francesco, and A. Ylä-Jääski. VM Consolidation with Usage Prediction for Energy-Efficient Cloud Data Centers. In IEEE Int. Conf. on Cloud Computing, pages 750–757, June 2015.
[19] C. Q. Wu and H. Cao. Optimizing the Performance of Big Data Workflows in Multi-cloud Environments Under Budget Constraint. In 2016 IEEE Int. Conf. on Services Computing (SCC), pages 138–145, June 2016.
[20] X. Zhu et al. Real-Time Tasks Oriented Energy-Aware Scheduling in Virtualized Clouds. IEEE Trans. on Cloud Computing, 2(2):168–180, 2014.
[21] Ching-Hsien Hsu et al. Optimizing Energy Consumption with Task Consolidation in Clouds. Information Sciences, 258:452–462, 2014.
[22] A. Beloglazov and R. Buyya. Energy Efficient Resource Management in Virtualized Cloud Data Centers. In IEEE/ACM Int. Conf. on Cluster, Cloud and Grid Computing, pages 826–831, May 2010.