TOPIC: 1. The Akamai Network: A Platform for High-Performance Internet Applications

AUTHOR: E. Nygren, R. K. Sitaraman, and J. Sun

ABSTRACT: Comprising more than 61,000 servers located across nearly 1,000
networks in 70 countries worldwide, the Akamai platform delivers hundreds of
billions of Internet interactions daily, helping thousands of enterprises boost the
performance and reliability of their Internet applications. In this paper, we give an
overview of the components and capabilities of this large-scale distributed
computing platform, and offer some insight into its architecture, design principles,
operation, and management.
TOPIC: 2. Energy-information transmission tradeoff in green cloud computing

AUTHOR: Amir-Hamed Mohsenian-Rad and Alberto Leon-Garcia

ABSTRACT:

With the rise of Internet-scale systems and cloud computing services, there is an
increasing trend towards building massive, energy-hungry, and geographically
distributed data centers. Due to their enormous energy consumption, data centers
are expected to have major impact on the electric grid and potentially the amount
of greenhouse gas emissions and carbon footprint. In this regard, the locations that
are selected to build future data centers as well as the service load to be routed to
each data center after it is built need to be carefully studied given various
environmental, cost, and quality-of-service considerations. To gain insights into
these problems, we develop an optimization-based framework, where the objective
functions range from minimizing the energy cost to minimizing the carbon
footprint subject to essential quality-of-service constraints. We show that in
multiple scenarios, these objectives can be conflicting, leading to an
energy-information tradeoff in green cloud computing.
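
To make the style of framework concrete, the following is a minimal sketch (in Python, using scipy) of the kind of load-routing optimization the abstract describes: the same feasible set is optimized once for energy cost and once for carbon footprint, with quality-of-service limits deciding which routes may carry load at all. All demands, prices, emission factors, site counts, and QoS flags are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of an optimization-based load-routing framework (illustrative
# only; all demands, prices, emission factors, and QoS flags are made up).
import numpy as np
from scipy.optimize import linprog

demand = np.array([40.0, 25.0])            # load originating in two user regions
energy_cost = np.array([[3.0, 5.0],        # $ per unit of load, region i -> site j
                        [6.0, 2.0]])
carbon = np.array([[0.8, 0.2],             # kg CO2 per unit of load, region i -> site j
                   [0.9, 0.3]])
qos_ok = np.array([[1, 1],                 # 1 if routing region i to site j meets QoS
                   [0, 1]])

def solve(per_unit_objective):
    """Minimize the total per-unit objective over all QoS-feasible routings."""
    c = per_unit_objective.flatten()
    # Each region's demand must be served in full: sum_j x[i, j] = demand[i].
    A_eq = np.array([[1, 1, 0, 0],
                     [0, 0, 1, 1]], dtype=float)
    # Routes that violate the QoS constraint are not allowed to carry any load.
    bounds = [(0, None) if ok else (0, 0) for ok in qos_ok.flatten()]
    return linprog(c, A_eq=A_eq, b_eq=demand, bounds=bounds).x.reshape(2, 2)

cheapest = solve(energy_cost)   # energy-cost-minimizing routing
greenest = solve(carbon)        # carbon-minimizing routing
print(cheapest)
print(greenest)                 # the two routings differ, exposing the tradeoff
```

Because the cost-minimizing and carbon-minimizing routings generally differ, comparing the two solutions is one simple way to exhibit the conflicting objectives the abstract refers to.
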
TOPIC: 3. Intelligent Placement of Datacenters for Internet Services

AUTHOR: I. Goiri, K. Le, J. Guitart, J. Torres, and R. Bianchini

ABSTRACT:

Popular Internet services are hosted by multiple geographically distributed data
centers. The location of the data centers has a direct impact on the services'
response times, capital and operational costs, and (indirect) carbon dioxide
emissions. Selecting a location involves many important considerations, including
its proximity to population centers, power plants, and network backbones, the
source of the electricity in the region, the electricity, land, and water prices at the
location, and the average temperatures at the location. As there can be many
potential locations and many issues to consider for each of them, the selection
process can be extremely involved and time-consuming. In this paper, we focus on
the selection process and its automation. Specifically, we propose a framework that
formalizes the process as a non-linear cost optimization problem, and approaches
for solving the problem. Based on the framework, we characterize areas across the
United States as potential locations for data centers, and delve deeper into seven
interesting locations. Using the framework and our solution approaches, we
illustrate the selection trade-offs by quantifying the minimum cost of (1) achieving
different response times, availability levels, and consistency times, and (2)
restricting services to green energy and chiller-less data centers. Among other
interesting results, we demonstrate that the intelligent placement of data centers
can save millions of dollars under a variety of conditions. We also demonstrate that
the selection process is most efficient and accurate when it uses a novel
combination of linear programming and simulated annealing.
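
As a rough illustration of the search component, the sketch below applies simulated annealing to pick a fixed number of data center sites from a candidate pool, in the spirit of the paper's combination of linear programming and simulated annealing. The cost function, candidate set, and annealing schedule are placeholder assumptions; the paper's actual cost model also folds in response time, availability, consistency, electricity, land, water, and temperature factors.

```python
# Illustrative sketch only: simulated-annealing search over candidate sites.
# The per-site costs below are random stand-ins, not the paper's cost model.
import math
import random

random.seed(0)
CANDIDATES = list(range(20))                                  # hypothetical candidate locations
SITE_COST = [random.uniform(1.0, 10.0) for _ in CANDIDATES]   # stand-in per-site costs
K = 4                                                         # number of data centers to place

def cost(placement):
    # Placeholder objective: sum of per-site costs for the chosen placement.
    return sum(SITE_COST[s] for s in placement)

def anneal(steps=5000, t0=5.0):
    current = random.sample(CANDIDATES, K)
    best = list(current)
    for step in range(steps):
        temp = t0 * (1 - step / steps) + 1e-9
        # Neighbor move: swap one chosen site for a currently unchosen one.
        neighbor = list(current)
        out_idx = random.randrange(K)
        unused = [c for c in CANDIDATES if c not in current]
        neighbor[out_idx] = random.choice(unused)
        delta = cost(neighbor) - cost(current)
        # Accept improvements always, worsenings with a temperature-dependent probability.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            current = neighbor
            if cost(current) < cost(best):
                best = list(current)
    return best, cost(best)

print(anneal())
```
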
TOPIC: 4. Dynamic right-sizing for power-proportional data centers

AUTHOR: M. Lin, A.Wierman, L. Andrew, and E. Thereska

ABSTRACT:

Power consumption imposes a significant cost for data centers implementing cloud
services, yet much of that power is used to maintain excess service capacity during
periods of predictably low load. This paper investigates how much can be saved by
dynamically `right-sizing' the data center by turning off servers during such
periods, and how to achieve that saving via an online algorithm. We prove that the
optimal offline algorithm for dynamic right-sizing has a simple structure when
viewed in reverse time, and this structure is exploited to develop a new `lazy'
online algorithm, which is proven to be 3-competitive. We validate the algorithm
using traces from two real data center workloads and show that significant cost-
savings are possible.
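
The following is a deliberately simplified sketch of the right-sizing idea: scale up immediately when load requires it, but delay switching servers off until the load has stayed low for several timesteps, so that toggling costs are not paid for short-lived dips. The capacity figure and delay threshold are invented for illustration, and this hysteresis rule only conveys the flavor of "lazy" right-sizing; it is not the paper's online algorithm or its 3-competitive guarantee.

```python
# Simplified right-sizing sketch (not the paper's lazy online algorithm):
# scale up eagerly, scale down only after a sustained lull in load.
SERVER_CAPACITY = 100      # requests/s one server can absorb (assumed)
SCALE_DOWN_DELAY = 3       # timesteps of sustained low load before removing a server

def right_size(load_trace):
    active = 1
    low_since = 0
    schedule = []
    for load in load_trace:
        needed = max(1, -(-load // SERVER_CAPACITY))   # ceiling division
        if needed > active:
            active = needed          # scale up immediately to meet demand
            low_since = 0
        elif needed < active:
            low_since += 1           # scale down only after a sustained lull
            if low_since >= SCALE_DOWN_DELAY:
                active -= 1
                low_since = 0
        else:
            low_since = 0
        schedule.append(active)
    return schedule

print(right_size([250, 420, 180, 90, 60, 55, 40, 300]))
```
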
TOPIC: 5. Cutting the Electric Bill for Internet-Scale Systems

AUTHOR: A. Qureshi, R. Weber, H. Balakrishnan, J. Guttag, and B. Maggs

ABSTRACT:

Energy expenses are becoming an increasingly important fraction of data center
operating costs. At the same time, the energy expense per unit of computation can
vary significantly between two different locations. In this paper, we characterize
the variation due to fluctuating electricity prices and argue that existing distributed
systems should be able to exploit this variation for significant economic gains.
Electricity prices exhibit both temporal and geographic variation, due to regional
demand differences, transmission inefficiencies, and generation diversity. Starting
with historical electricity prices for twenty-nine locations in the US and network
traffic data collected on Akamai’s CDN, we use simulation to quantify the possible
economic gains for a realistic workload. Our results imply that existing systems
may be able to save millions of dollars a year in electricity costs, by being
cognizant of locational computation cost differences.
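
A minimal sketch of the idea, under assumed numbers: among replica locations that satisfy a latency budget, route load to the one with the cheapest current electricity price, falling back to the lowest-latency site when no location meets the budget. The site names, prices, and latency figures below are placeholders, not data from the paper.

```python
# Hedged sketch of price-aware request routing; all prices and latencies are invented.
SITES = {
    "virginia":   {"price_usd_mwh": 42.0, "latency_ms": 30},
    "illinois":   {"price_usd_mwh": 35.0, "latency_ms": 55},
    "california": {"price_usd_mwh": 61.0, "latency_ms": 80},
}
LATENCY_BUDGET_MS = 60

def pick_site(sites, budget_ms):
    # Keep only sites that meet the latency budget, then choose the cheapest power.
    eligible = {name: s for name, s in sites.items() if s["latency_ms"] <= budget_ms}
    if not eligible:
        # Fall back to the lowest-latency site if nothing meets the budget.
        return min(sites, key=lambda n: sites[n]["latency_ms"])
    return min(eligible, key=lambda n: eligible[n]["price_usd_mwh"])

print(pick_site(SITES, LATENCY_BUDGET_MS))   # -> "illinois" under these numbers
```
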
