
Experiment 1: Cloud Conceptualization and Performance Evolution of Services over Cloud

Cloud definitions

Cloud computing is a model for enabling convenient, on-demand network access to a

shared pool of configurable computing resources (e.g., networks, servers, storage,
applications, and services) that can be rapidly provisioned and released with minimal
management effort or service provider interaction. This cloud model promotes
availability and is composed of five essential characteristics, three service models, and
four deployment models.


NIST lists several characteristics that are essential for a service to be considered cloud. These essential characteristics are:
On-demand self-service
By on-demand we mean that the end user can provision computing capabilities, such as server time and network storage, as needed, automatically, without requiring human interaction with each service provider.
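The self-service idea can be illustrated with a small sketch. The class and method names below are purely hypothetical, not any real provider's API; the point is that resources are granted and released programmatically, with no human on the provider's side.

```python
# Hypothetical sketch of on-demand self-service: a consumer provisions
# and releases resources programmatically. All names here are illustrative,
# not a real provider API.

class SelfServicePortal:
    """Simulates a provider endpoint that grants resources on demand."""

    def __init__(self, capacity_gb=1000):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # user -> GB of storage provisioned

    def provision_storage(self, user, gb):
        # Granted automatically if pooled capacity allows -- no ticket, no call.
        if gb > self.capacity_gb:
            raise RuntimeError("insufficient pooled capacity")
        self.capacity_gb -= gb
        self.allocations[user] = self.allocations.get(user, 0) + gb
        return self.allocations[user]

    def release_storage(self, user):
        # Resources are released just as easily when no longer needed.
        freed = self.allocations.pop(user, 0)
        self.capacity_gb += freed
        return freed

portal = SelfServicePortal(capacity_gb=500)
portal.provision_storage("alice", 120)   # instant, automated grant
portal.release_storage("alice")          # instant release
```

In a real cloud the same request/release cycle happens through the provider's web console or API rather than an in-memory object.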
Broad network access
Broad network access is the ability to access the service via standard platforms (such as desktops, laptops, mobile phones, and tablets).
Resource pooling
Resources are pooled across multiple customers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand. There is a sense of location independence in that the customer generally has no control over or knowledge of the exact location (e.g., country, state, or datacenter) of the provided resources. Examples of resources include storage, processing, memory, and network bandwidth. (The remaining two essential characteristics in the NIST definition are rapid elasticity and measured service.)

Performance evolution of service over cloud

High-performance computing is one of those areas people are skeptical about when it comes to migrating services to the cloud; performance and security are the biggest concerns, and understandably so.
Let's look at three ways in which cloud computing is evolving in the area of high-performance computing, and directly address those concerns here.

Service level agreement and reliability:

Some cloud providers offer guarantees of higher levels of service as a way to separate themselves from the pack. In Rackspace: The Avis of cloud computing, I describe how Rackspace offers higher cloud service SLAs to compete with Amazon.
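When comparing SLA levels, it helps to translate an availability percentage ("nines") into the downtime it actually permits. The percentages below are generic examples, not any provider's actual terms:

```python
# Back-of-the-envelope arithmetic for comparing SLA levels: convert an
# availability percentage into the maximum downtime it allows per month.

def max_downtime_minutes(availability_pct, period_minutes=30 * 24 * 60):
    """Maximum permitted downtime per period (default: a 30-day month)."""
    return period_minutes * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% uptime -> {max_downtime_minutes(pct):.1f} min/month down")
```

The jump from 99% to 99.9% cuts the permitted monthly downtime roughly tenfold, which is why each extra "nine" in an SLA commands a premium.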

Factors to consider when defining the terms of an SLA:

Business level objectives:

An organization must define why it will use cloud services before it can define exactly which services it will use. This part is more about organizational politics than technical issues: some groups may get funding cuts or lose control of their infrastructure.

Responsibilities of both parties:

It is important to define the balance of responsibilities between the provider and the consumer. For example, the provider will be responsible for the Software-as-a-Service aspects, but the consumer may be mostly responsible for a VM that contains licensed software and works with sensitive data.


System redundancy:

Consider how redundant your provider's systems are.

Business continuity/disaster recovery:

The consumer should ensure the provider maintains adequate disaster protection. Two examples come to mind: storing valuable data in the cloud as backup, and cloud bursting (switching over when in-house data centers are unable to handle the processing load).

Maintenance:

One of the nicest aspects of using a cloud is that the provider handles the maintenance. But consumers should know when providers will perform maintenance tasks:

Will services be unavailable during that time?

Will services be available, but with much lower throughput?

Will the consumer have a chance to test their applications against the updated service?

Data location:

Some regulations require that certain types of data be stored only in certain physical locations. Providers can respond to those requirements with a guarantee that a consumer's data will be stored only in certain locations, and with the ability to audit compliance.
Dynamically Scalable Computing

Scalable computing is the ability to scale your services up and down as the need arises. Consider a typical web-serving requirement on a dedicated server: upgrading is an arduous task that involves scheduling downtime and then sending in engineers to physically perform the upgrade, such as adding memory or swapping a CPU. This is impractical to do on a frequent basis, and if the additional capacity is required for only a short time, it is incredibly costly. There is also a finite physical limit on the number of upgrades that can be done this way; you could keep adding more servers, but again, this is impractical for short periods of time.

Dynamically scalable computing harnesses the power of virtualized computing instances in the cloud. These are dynamically scalable in the sense that they can be upgraded as, and only when, required, and downscaled again instantly. This can be done without downtime, without scheduling engineering work, and programmatically: you can detect when you need more computing resources and automatically increase the amount of computing power available. It is, simply, a revolution.
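The detect-and-scale loop described above can be sketched as follows. The thresholds and the in-memory "fleet" are assumptions for illustration; a real deployment would call a cloud provider's autoscaling API instead of adjusting a counter.

```python
# Minimal sketch of an autoscaling decision: grow the fleet under high load,
# shrink it when load subsides, within fixed bounds. Thresholds are assumed.

def autoscale(instances, cpu_utilization, high=0.80, low=0.25,
              min_instances=1, max_instances=10):
    """Return the new instance count for the observed average CPU load."""
    if cpu_utilization > high and instances < max_instances:
        return instances + 1   # demand spike: add capacity
    if cpu_utilization < low and instances > min_instances:
        return instances - 1   # demand fell: release capacity (and cost)
    return instances           # within band: no change

fleet = 2
for load in (0.90, 0.95, 0.40, 0.10, 0.10):   # simulated load samples
    fleet = autoscale(fleet, load)
print(fleet)   # the fleet grew under load and shrank back afterwards
```

One step per sample keeps the sketch simple; production autoscalers usually also add cooldown periods so the fleet does not oscillate on noisy load readings.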

Cost is also a major factor here: to achieve the level of computing power possible with cloud virtualization in a local server environment would require a huge investment. By using virtualized cloud computing, not only can you achieve scalable services, but you're also effectively renting them; this represents massive cost savings and avoids wasted computing power.

For companies with a global presence, you can typically also choose
the physical location of your computing instances, thereby ensuring the best access
speeds to local teams.
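Choosing the best-placed region can be as simple as measuring latency from each team and picking the minimum. The region names and millisecond figures below are illustrative assumptions, not real measurements:

```python
# Sketch of picking the computing-instance region closest to a team,
# given measured round-trip latencies (names and numbers are made up).

def best_region(latencies_ms):
    """Choose the region with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {"us-east": 180, "eu-west": 25, "ap-south": 240}  # ms from London
print(best_region(measured))
```

A global company would run such a measurement per office and pin each team's instances to its nearest region.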

Infinite Data Storage

Large volumes of data are the other major consideration, even more so when you factor in backups and redundant drives. Depending on the speed of access required, there are various cloud services that will give you effectively unlimited data storage at very affordable prices, far more cost-effective than storing everything locally. If you simply need a large data archive and accessing the files isn't urgent, the costs are even lower.
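The hot-versus-archive trade-off is easy to quantify. The per-GB prices below are made-up placeholders, not any provider's published rates:

```python
# Illustrative cost comparison between a "hot" storage tier and a cheaper
# slow-retrieval archive tier. Prices per GB-month are assumed placeholders.

def monthly_cost(size_gb, price_per_gb):
    return size_gb * price_per_gb

HOT_PRICE = 0.023      # assumed $/GB-month, frequently accessed storage
ARCHIVE_PRICE = 0.002  # assumed $/GB-month, archive with slow retrieval

data_gb = 10_000
hot = monthly_cost(data_gb, HOT_PRICE)
cold = monthly_cost(data_gb, ARCHIVE_PRICE)
print(f"hot ${hot:.2f}/mo vs archive ${cold:.2f}/mo")
```

With an order-of-magnitude price gap like this, moving rarely accessed data to an archive tier dominates the storage bill, which is why providers offer several tiers keyed to access speed.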

In addition, the task of backups is outsourced to the cloud provider: you don't need to worry about storing these files in multiple physical locations. One less worry is always welcome.