I. INTRODUCTION
2 http://www.google.com.au/green/energy/
3 http://www.microsoft.com/environment/renewable.aspx/.
4 http://aws.amazon.com/about-aws/sustainable-energy/.
Fig. 1. System Model.
II. SYSTEM MODEL
Most cloud providers offer their services in multiple locations worldwide, called regions. For example, Amazon Web Services (AWS) currently has data centers located in 9 different regions: three in Asia Pacific, two in Europe, one in the US East, two in the US West, and one in South America. Each region contains a collection D of n isolated data centers, often called zones. These data centers are located in geographically diverse locations within a region, and each uses various sources of energy. In our model, we assume data center d_i ∈ D (1 ≤ i ≤ n) has two main sources of electricity: power generated by local on-site renewable energy sources (e.g., solar, wind, or a mixture) and the utility grid.
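This region/zone/energy-source hierarchy can be sketched as a minimal data model; all names here are illustrative, not from the paper:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataCenter:
    """A zone d_i within a region, with on-site renewable sources;
    the utility grid is implicitly always available as the second source."""
    name: str
    renewable_sources: List[str] = field(default_factory=list)  # e.g., solar, wind

@dataclass
class Region:
    """A region holding n isolated data centers (zones)."""
    name: str
    zones: List[DataCenter]

# A hypothetical region with two zones using different renewables.
region = Region("us-west", [
    DataCenter("d1", ["solar"]),
    DataCenter("d2", ["wind", "solar"]),
])
print(len(region.zones))  # → 2
```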
Suppose that the cloud provider receives a set of m service requests R = {r_1, ..., r_m} in a specific time window from cloud users who have no preference among the data centers in the region. A request can range from one submitted to acquire container-based instances or hypervisor-based virtual machines (VMs) to a job-based request for running an application in a specific time period. A request r_j (1 ≤ j ≤ m) is denoted by the 3-tuple (a_j, q_j, l_j), where a_j is the arrival time of the request within the time window, q_j is the required quantity of processing units (e.g., the total number of ECUs for VMs), and l_j is the lifetime of the request (i.e., the holding time of the VMs). In practice, the lifetime and arrival time of a request are not known by the provider in advance; e.g., a cloud provider does not know when the next VM request will arrive or how long the user will hold the allocated VMs. Note that, in our model, requests must be served in order of arrival, in a first-come-first-served fashion; no request can be delayed in favor of another request.
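The request model and the FCFS serving rule can be sketched as follows; the class and function names are ours, for illustration only:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Request:
    """A service request r_j = (a_j, q_j, l_j)."""
    arrival: int    # a_j: arrival time slot within the window
    quantity: int   # q_j: required quantity of processing units (e.g., ECUs)
    lifetime: int   # l_j: number of slots the allocation is held

def dispatch_fcfs(requests: List[Request]) -> List[Request]:
    """Serve requests strictly in order of arrival; no request
    may be delayed in favor of another."""
    return sorted(requests, key=lambda r: r.arrival)

# Three requests arriving out of order are served first come, first served.
reqs = [Request(3, 1, 2), Request(1, 2, 5), Request(2, 1, 3)]
order = dispatch_fcfs(reqs)
print([r.arrival for r in order])  # → [1, 2, 3]
```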
We consider a discrete-time model in which the amount of power generated by on-site renewable energy sources in each time slot (e.g., in kWh) is given for every data center. Let e_{i,t} denote the units of power generated from renewable sources at time slot t in data center i that are not consumed by currently accommodated requests. The value of e_{i,t} changes dynamically with the weather conditions and the availability of renewable generation, and can be written as

e_{i,t} = E_{i,t} − Σ_{r_j ∈ R_{i,t}} q_j,    (1)

where E_{i,t} is the total renewable generation at time slot t in data center i and R_{i,t} is the set of requests that are active in data center i at time t.
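The residual green energy per slot can be sketched in a few lines; the total-generation input and the floor at zero are assumptions made for illustration:

```python
from typing import List

def residual_green(generated: float, active_quantities: List[float]) -> float:
    """e_{i,t}: green power generated at slot t in data center i that is
    not consumed by currently accommodated requests. We floor at zero,
    assuming any deficit is covered by the utility grid instead."""
    return max(generated - sum(active_quantities), 0.0)

# 5 units of green power generated; active requests draw 2 + 1 units.
print(residual_green(5.0, [2, 1]))  # → 2.0
```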
Suppose that the cloud provider can precisely predict the following values for every time slot t in a future time window of size T (1 ≤ t ≤ T):
1)
2)
3)
4)
Fig. 2. An example of load balancing between two data centers d1 and d2 over a window of T = 6 time slots. Each request (A through I) is given as a tuple (arrival time, quantity, lifetime); for example, D = (1, 2, 5), E = (2, 1, 3), F = (3, 1, 2), G = (3, 1, 3), H = (5, 2, 2), and I = (6, 3, 2). The load-balancing algorithm assigns requests to the data centers; the per-slot grid prices p_{1,t} of d1 are (0.3, 0.4, 0.4, 0.2, 0.2, 0.1) and those of d2 are (0.5, 0.3, 0.2, 0.3, 0.4, 0.5) for t = 1, ..., 6.
computed as:

C = min Σ_{i=1}^{n} Σ_{t=1}^{T} g_{i,t} · p_{i,t},    (2)

where g_{i,t} is the power drawn from the utility grid by data center i at time slot t and p_{i,t} is the grid electricity price at that slot.
[Fig. 3. Assignment of requests r1 and r2 among data centers d1, d2, and d3, based on each data center's residual green energy e.]
c(r_j, d_i) = Σ_{t=a_j}^{a_j + l_j} (q_j − e_{i,t}) · p_t,    (3)
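The per-request grid cost of Eq. (3) can be sketched as follows; the clamping of surplus slots at zero and the exact slot indexing of the lifetime range are our assumptions, not details confirmed by the paper:

```python
from typing import Sequence

def grid_cost(a_j: int, q_j: float, l_j: int,
              e_i: Sequence[float], p: Sequence[float]) -> float:
    """Sketch of Eq. (3): sum, over the request's lifetime, of the grid
    price p_t paid for the part of the demand q_j not covered by the
    residual green energy e_{i,t}. Slots with surplus green energy are
    assumed to cost nothing (shortfall clamped at zero)."""
    return sum(max(q_j - e_i[t], 0.0) * p[t]
               for t in range(a_j, a_j + l_j))

# A request arriving at slot 1, needing 2 units for 3 slots.
e = [0, 1.0, 0.5, 0.0, 0.0]   # residual green energy per slot
p = [0, 0.3, 0.4, 0.4, 0.2]   # grid price per slot
print(round(grid_cost(1, 2, 3, e, p), 4))  # (2-1)*0.3 + (2-0.5)*0.4 + 2*0.4 → 1.7
```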
TABLE I. Fuzzy rule base mapping the input terms of U_i, F_i, and B_i (low, mid, high) to a Suitability output (verylow, low, mid, high, or veryhigh).

[Fig. 4. The fuzzy inference system: crisp inputs are fuzzified using the membership functions, evaluated against the rule base by the fuzzy inference engine, and defuzzified to produce the output.]
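A fuzzy pipeline of this kind (fuzzify, fire rules, defuzzify) can be sketched as follows; the triangular membership breakpoints, the miniature two-input rule base, and all names are illustrative assumptions, not the paper's calibrated system:

```python
def tri(x: float, a: float, b: float, c: float) -> float:
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Assumed membership functions on [0, 1] for inputs such as U_i and B_i.
MF = {"low": (-0.4, 0.0, 0.5), "mid": (0.1, 0.5, 0.9), "high": (0.5, 1.0, 1.4)}

# A tiny illustrative rule base: (U_i term, B_i term) -> suitability score.
RULES = {("low", "low"): 0.9, ("low", "high"): 0.7,
         ("high", "low"): 0.3, ("high", "high"): 0.1}

def suitability(u: float, b: float) -> float:
    """Mamdani-style sketch: fuzzify the inputs, fire each rule with the
    min of its antecedent memberships, and defuzzify as a weighted
    average of the rules' output scores."""
    num = den = 0.0
    for (ut, bt), score in RULES.items():
        w = min(tri(u, *MF[ut]), tri(b, *MF[bt]))
        num += w * score
        den += w
    return num / den if den else 0.0

# A lightly loaded (low U_i), low-B_i data center scores high.
print(round(suitability(0.2, 0.2), 2))  # → 0.9
```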
V. PERFORMANCE EVALUATION
[Fig. 5. Membership functions (degree of membership over the normalized range 0 to 1) for the inputs U_i, B_i, and F_i, each with the terms low, mid, and high, and for the Suitability output, with the terms verylow, low, mid, high, and veryhigh.]

Fig. 6. Normalized workload per day over the trace period.
A. Experimental Setup
1) Workload Setup: To generate an IaaS workload for our case study, we use the Google cluster-usage traces [7], since, to our knowledge, no publicly available workload trace of a real-world IaaS cloud currently exists. This dataset includes the resource requirements of tasks submitted by 933 users to a Google cluster of 12K physical servers over a period of 29 days. In our settings, we are interested in a workload
[Fig. 7. The three zones considered in the experiments (Los Angeles/LA, Phoenix/AZ, and New Mexico/NM) and the associated SP-15 and Palo Verde electricity pricing hubs.]

[Fig. 8. Daily per-zone traces for the LA, AZ, and NM zones over the 30-day period.]
[Plots comparing normalized cost and normalized green power utilization for the FLB, FABEF, and HAREF algorithms.]
Fig. 10. The total cost of different algorithms normalized to the outcome of
the RR algorithm.
Fig. 12. Effect of the past/future window size (3 to 12 hours) on the total cost of FLB and FABEF, normalized to the outcome of the RR algorithm.
[A further plot comparing FABEF, FLB, and HAREF on a normalized scale.]

VI. RELATED WORK
REFERENCES