
Vlad Gădescu

Benefits of IPv6 in Cloud Computing

Master of Science Thesis

Subject and examiners approved by the Faculty of Computing
and Electrical Engineering Council on 11 January 2012

Examiners: Prof. Jarmo Harju


MSc. Aleksi Suhonen
ABSTRACT

TAMPERE UNIVERSITY OF TECHNOLOGY

Master's degree program in Information Technology


Gădescu, Vlad: Benefits of IPv6 in Cloud Computing
Master's thesis, 54 pages
June 2012
Major subject: Communication Networks and Protocols
Examiner(s): Prof. Jarmo Harju and MSc. Aleksi Suhonen
Keywords: cloud computing, IPv6

Efficiency is one of the main concerns of today's world. As the entire world relies on
computers and networks, their efficiency is of utmost importance. Energy, processing
power, storage and data access must all be used and offered in the most profitable and
economical way possible. The majority of companies and businesses cannot afford the
implementation and deployment of huge data centres for their specific requirements.
Thus, from the need for efficiency in both business and IT environments, the idea of
cloud computing emerged: online infrastructures in which clients buy or rent processing
power and storage according to their needs.
During recent years, cloud computing has become the new evolutionary trend of
the internet. Due to its material and monetary merits, it has gained increasing
popularity. IPv6 was developed as a response to the limitations of IPv4; it brings
new features and advantages that fit into the cloud computing paradigm and provide
the means to develop new techniques from which modern data centres can profit.
The development and implementation of IPv6 has been underway for some years and,
consequently, its improvements over IPv4 are well outlined and acknowledged.
The thesis is based on research into the advantages of IPv6 over IPv4 and how
they can improve the operation and efficiency of cloud computing. The first part
presents background information, giving a brief view of what cloud computing and
virtualization are, as well as a few problems that customers might find in the cloud
computing idea. As the advantages of IPv6 over IPv4 are widely known, but their
benefits for data centres are rarely exposed, the second part of the thesis focuses on how
a cloud computing environment can benefit from IPv6 by joining IPv6 and cloud
computing together. As a result, the thesis tries to give a clearer picture to both customers
and network administrators, so that they can properly see and understand why IPv6 and
cloud computing should be used together.

PREFACE

The thesis was written as a response to the ever-growing idea of cloud computing and
the low deployment of IPv6. It is based on personal research done both at the university
and in spare time, and it relies heavily on papers published by the IEEE and on RFCs.
During the writing process I concluded that the background information should be
presented more thoroughly in order to better support the benefits IPv6 brings to the cloud
concept. Moreover, the benefits are presented in such a way that the thesis might be
used as a source to motivate a faster deployment of IPv6 in cloud computing data
centres.
I would like to thank Prof. Jarmo Harju and MSc. Aleksi Suhonen for their
guidance in the writing process and with the formalities regarding the university thesis
process, and my friend who helped me with proofreading.

Tampere, 29 May 2012

Vlad Gădescu (vlad@gadescu.com)


Str. Carpenului, Nr. 3
500256 Braşov
Romania

TABLE OF CONTENTS

ABSTRACT ................................................................................................................... ii
PREFACE ......................................................................................................................iii
TABLE OF CONTENTS .............................................................................................. iv
LIST OF FIGURES ....................................................................................................... vi
LIST OF ABBREVIATIONS....................................................................................... vii
1 Introduction ................................................................................................................. 1
2 IPv4 and IPv6 differences ........................................................................................... 3
3 Virtualization and IPv6 ............................................................................................... 7
3.1 VPNs .................................................................................................................. 8
3.1.1 Traditional versus virtualized ISPs ............................................................. 8
3.2 Virtualization software - hypervisors and virtual networking ............................ 9
3.3 Server farms and potential problems ................................................................ 11
3.3.1 Traffic overhead and load balancing ......................................................... 12
3.3.2 Localization and migration ....................................................................... 13
4 Cloud computing ....................................................................................................... 15
4.1 Software infrastructure versus hardware infrastructure ................................... 16
4.2 Implementations of Cloud computing .............................................................. 18
4.3 Insecurities and problems in cloud computing ................................................. 19
4.4 Future of cloud computing ............................................................................... 21
5 IPv6 benefits in cloud computing ............................................................................. 23
5.1 Security benefits ............................................................................................... 23
5.1.1 NAT avoidance ......................................................................................... 23
5.1.2 IPsec .......................................................................................................... 27
5.2 Network Management benefits......................................................................... 32
5.2.1 IPv6 addressing and interface identifiers .................................................. 32
5.2.2 Stateless approach ..................................................................................... 35
5.2.3 Address validation..................................................................................... 36
5.3 QoS benefits ..................................................................................................... 38
5.4 Performance benefits ........................................................................................ 41
5.4.1 Load balancing .......................................................................................... 41

5.4.2 Broadcast efficiency .................................................................................. 43
5.5 Mobility benefits .............................................................................................. 45
6 Conclusions ............................................................................................................... 49
7 Bibliography .............................................................................................................. 51

LIST OF FIGURES

Figure 1. Differences between IPv6 and IPv4 .................................................................. 4


Figure 2. IPv4 header [3] ................................................................................................. 5
Figure 3. IPv6 Header [4]................................................................................................. 5
Figure 4. Fragmentation Header [4] ................................................................................ 6
Figure 5. Non virtualization vs. virtualization .................................................................. 9
Figure 6. Type 1 and Type 2 hypervisors........................................................................ 10
Figure 7. Hardware infrastructure and software infrastructure .................................... 16
Figure 8. Cloud Computing deployment models............................................................. 18
Figure 9. Basic NAT........................................................................................................ 23
Figure 10. AH and ESP header insertion ....................................................................... 24
Figure 11. Authentication Header [3] ............................................................................. 25
Figure 12. IPv4 vs. IPv6 AH authentication. .................................................................. 27
Figure 13. IPv6 addressing [41] ..................................................................................... 30
Figure 14. Migration and IP based authentication ........................................................ 30
Figure 15. Multiple addresses per interface and random generated addresses. ............ 33
Figure 16. Anycast .......................................................................................................... 34
Figure 17. Structure of a data centre and exponential number of machines.................. 43
Figure 18. Mobile IPv4 ................................................................................................... 45
Figure 19. MIPv6 and VM migration ............................................................................. 47

LIST OF ABBREVIATIONS

AH - Authentication Header
ARP - Address Resolution Protocol
AS - Autonomous System
CDN - Content Distribution Network
CN - Correspondent Node
DNS - Domain Name System
DSCP - Differentiated Services Code Point
ESP - Encapsulating Security Payload
FA - Foreign Agent
HA - Home Agent
IaaS - Infrastructure as a Service
ICMPv6 - Internet Control Message Protocol Version 6
IKE - Internet Key Exchange
IPv4 - Internet Protocol Version 4
IPv6 - Internet Protocol Version 6
ISP - Internet Service Provider
MAC - Media Access Control
MIPv4 - Mobile IPv4
MLD - Multicast Listener Discovery
MN - Mobile Node
MTU - Maximum Transmission Unit
NAT - Network Address Translation
ND - Neighbor Discovery Protocol
NIC - Network Interface Card
OS - Operating System
PaaS - Platform as a Service
QoS - Quality of Service
RFC - Request For Comments
SA - Security Association
SaaS - Software as a Service
SLA - Service Level Agreement
TCO - Total Cost of Ownership
TCP - Transmission Control Protocol
UDP - User Datagram Protocol
uRPF - Unicast Reverse Path Forwarding
VLAN - Virtual Local Area Network
VM - Virtual Machine
VPN - Virtual Private Network
WAN - Wide Area Network

1 Introduction

The internet has evolved at a fast pace in the past two decades, beyond anyone's
expectations. It has offered new solutions for businesses and personal development. It
has evolved from a text-only environment to an interactive one with all sorts of
media. At the beginning of this decade it made a major shift to Web 2.0 and, again,
everything changed. It seems that every now and then a new development perspective
arises that changes the status quo of the virtual world and how we perceive it.
Nowadays it is cloud computing.
Cloud computing is a technology that changes the way businesses think
about using IT resources and the internet. Cloud computing makes use of
virtualization to provide new kinds of services, from software to hardware. This means
that commercial organizations can now make use of powerful IT infrastructures at
lower cost. Processing power can be accessed on demand and according to budget. These
are all great advantages, not only from the cost point of view, but also because they
open new possibilities of development for companies that could not afford large IT
infrastructures.
Cloud computing is still in its infancy and is an emerging technology. It
provides great benefits, but it is open to further improvements. New protocols and ideas
can be coupled with virtualization to provide greater value or to solve existing problems.
One protocol that can do this is IPv6. But the IT world is quite reluctant to adopt new
technologies which, at first glance, do not provide any palpable benefits or advantages.
The same can be said about businesses and business managers, who see no reason to
invest money in something that is not broken. As a consequence, this leads to the
inflexibility of the internet.
The ossification of the internet (its inflexibility and reluctance to adopt new technologies)
[1] and the wide spread of IPv4 in networks around the globe have made companies,
big or small, unenthusiastic about implementing the new IPv6. A proof of this situation is
the low deployment of IPv6 in the internet, specifically in virtual
infrastructures, making this an issue for companies and customers alike. Security,
privacy, reliability, fast resource provisioning, mobility and other problems, now focused
on virtual environments, can be improved or solved to some degree by taking the
next step: the implementation and development of infrastructures based on
IPv6.
Development has always been driven by key elements that came at the
right moment. The internet spread all over the globe and made IPv4 develop beyond
anyone's expectations. As stated by Peter Loshin, this was the killer application for the
older protocols [2]. Nowadays, cloud computing is the new killer application for IPv6.
It can be argued that the new internet will evolve around cloud computing and
virtualization. IPv6 will have to be part of this progress, but for that to happen, the
proper advantages have to be pointed out. A clear examination is needed of why the
two technologies will help each other grow. Businesses as well as regular customers
could do with a clear view of why cloud computing makes perfect sense with IPv6.
The benefits of IPv6 in the cloud computing environment have to be
properly outlined and explained. This thesis will show the benefits of, and the need for,
implementing IPv6 so that services and virtual environments may develop
further. The new protocol fits the needs of the new internet much better, and that means
companies as well as customers need to know the advantages they will gain by switching
to it.
The thesis is structured as follows: in Chapter two, a brief comparison
between IPv6 and IPv4 is made, outlining the main differences of the next
generation IP; in Chapter three, basic background and concepts of virtualization are
offered; Chapter four presents the cloud computing idea, along with information on how
it is deployed in the network and a clear distinction between online hardware and
software; Chapter five shows the benefits of IPv6 in the context of cloud computing and
how they will affect cloud computing data centres; and finally, Chapter six presents
the concluding remarks.

2 IPv4 and IPv6 differences

It is widely known that TCP/IP is the backbone protocol stack with which the internet
grew from anonymity and low coverage to worldwide spread and availability. TCP/IP
went through a period of modifications and testing until it was adopted by the major
players in networking at the time: universities and the military. However, it was
designed in a period in which computer networks were in their infancy. They were
seen as the pinnacle of the computer world, something revolutionary, but nonetheless
still not widely adopted. So, the TCP/IP stack started to be used in these small, rather
primitive networks. No one thought that, at some point, computers all over the world would
be interconnected. Consequently, due to the rapid and unforeseen development of networks,
IPv4 is at a point at which not only have its limits been reached, but there are serious
drawbacks that greatly limit current internet services.
IPv4 was developed around the idea of interconnecting dedicated networks,
for example different universities or research centres, government facilities and so on.
At the time, the number of available addresses (2^32) was seen as more than
enough for all the existing networks. Security was not a concern; routing tables and
router performance with respect to IP header processing were not taken into
consideration. However, all of these are now of utmost interest and concern. During the
1990s, efforts were put into creating a new protocol that would address all these new
problems that were not foreseen when IPv4 was created. Thus, IPv6 came to
fruition.
One would think that by now IPv6 would be implemented worldwide. Even
though the internet has passed through different concepts in resource management, from
horizontal and vertical scalability to cloud computing, IPv6 is not yet widely and fully
deployed. Figure 1 presents some of the differences between IPv4 and IPv6:
- IPv6 has an address length of 128 bits, meaning that the pool of addresses will
  be large enough to serve all present and future hosts on the internet.
  Moreover, addresses are differentiated into blocks meant for specific functions;
  they are valid only in specific parts of the network, identified by their scope:
  link-local, unique local or global.
- Besides the usual unicast and multicast addresses, IPv6 includes a new anycast
  address format.
- IPsec support is mandatory in the new protocol, an aspect that can solve many
  of the security issues that arise with careless users.
- QoS can now be served directly through a field in the IPv6 header, which opens
  new possibilities in how to manage communications and application traffic.
- Fragmentation does not happen along the route with IPv6; this is beneficial
  because it decreases the work a router has to do when packet fragmentation is
  needed.
- The checksum is not included in the header; as with fragmentation, this relieves
  some of the burden on the router, since the checksum no longer needs recalculating
  every time a field such as the hop limit changes. However, this function can be
  assigned to upper-layer protocols or ICMPv6.
- In IPv6, the Neighbor Discovery protocol replaces broadcasts and the ARP
  protocol, thus reducing network floods through more efficient LAN
  communication.
- With IPv6, any host on the network can autoconfigure itself. Manual
  configuration or even DHCP servers are no longer needed, unless a certain
  situation requires them.
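The autoconfiguration mentioned above typically derives a host's interface identifier from its MAC address using the modified EUI-64 procedure of RFC 4291: the 48-bit MAC is split in half, FFFE is inserted in the middle, and the universal/local bit is flipped. A minimal Python sketch (the MAC address below is an arbitrary example):

```python
def eui64_interface_id(mac: str) -> str:
    """Derive a modified EUI-64 interface identifier from a 48-bit MAC."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip the universal/local bit
    iid = b[:3] + bytearray([0xFF, 0xFE]) + b[3:]  # insert FFFE in the middle
    # Group the 8 bytes into four 16-bit hex groups, without leading zeros.
    return ":".join(f"{iid[i] << 8 | iid[i + 1]:x}" for i in range(0, 8, 2))

# A link-local address is the fe80::/64 prefix plus the interface identifier.
print("fe80::" + eui64_interface_id("00:1a:2b:3c:4d:5e"))
# fe80::21a:2bff:fe3c:4d5e
```

Because the identifier embeds the MAC address, later privacy extensions (RFC 4941) introduced randomized identifiers as an alternative.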

IPv6 | IPv4
Addresses are 128 bits in length | Addresses are 32 bits in length
IPsec support is mandatory | IPsec is optional
QoS handling through the flow label field in the header | No QoS identifier in the header
Routers do not fragment packets, only the sending node does | Both routers and hosts can fragment packets
No checksum in the header | Checksum in the header
IP-to-MAC resolution is done through multicast Neighbour Solicitation | IP-to-MAC resolution is done through ARP broadcast
Broadcast addresses are replaced by the link-local scope all-nodes multicast address | Uses broadcast addresses to send traffic to all nodes on a subnet
Automatic configuration; does not require DHCP | Manual or DHCP configuration
Must support a 1280-byte packet size (no fragmentation) | Must support a 576-byte packet size (may be fragmented)

Figure 1. Differences between IPv6 and IPv4
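The address scopes and the all-nodes multicast address mentioned above can be inspected with Python's standard ipaddress module; a small sketch with arbitrary example host parts:

```python
import ipaddress

link_local = ipaddress.IPv6Address("fe80::1")    # link-local scope (fe80::/10)
unique_local = ipaddress.IPv6Address("fd00::1")  # unique local (fc00::/7)
all_nodes = ipaddress.IPv6Address("ff02::1")     # all-nodes multicast, the IPv6
                                                 # replacement for broadcast
print(link_local.is_link_local)    # True
print(unique_local.is_private)     # True
print(all_nodes.is_multicast)      # True
```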


The differences between the two versions of IP are each meant to improve the
overall performance of the protocol and to increase the security, mobility and flexibility of
IP itself. However, as with IPv4, the internet has evolved in a direction that could not have
been predicted; thus, nowadays, IPv6 has to envelop the needs of the new internet
paradigm. The idea of cloud computing is quite recent, so none of the improvements were
meant explicitly for it. Therefore, the benefits and additions of IPv6 have to be put
in context. However, a first step is to present the most obvious differences between the
protocols.

Figure 2 and Figure 3 depict the headers of the two protocols. As can be
clearly seen, the IPv6 header is simpler, with fewer fields. Consequently, it can be
concluded that it carries much less information than an IPv4 header, even though the
next generation IP header is double the length (40 bytes versus the 20-byte minimum of
IPv4). Some of the fields presented in the headers are common to the two, but the names
differ. Obviously, the version field indicates the IP version of the header: 4 in the case of
IPv4 and 6 in the case of IPv6.

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version| IHL |Type of Service| Total Length |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identification |Flags| Fragment Offset |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Time to Live | Protocol | Header Checksum |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Source Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Destination Address |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Options | Padding |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 2. IPv4 header [3]

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|Version| Traffic Class | Flow Label |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Payload Length | Next Header | Hop Limit |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ +
| |
+ Source Address +
| |
+ +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ +
| |
+ Destination Address +
| |
+ +
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 3. IPv6 Header [4]
Both the Type of Service field in IPv4 and the Traffic Class field in IPv6 have the same
function: to differentiate service classes in QoS techniques. It has to be stated that both
these fields have been modified from their original purpose and are now used for the DSCP
in DiffServ QoS [5], as will be presented later.

The Total Length and Payload Length fields define the packet length. In the
case of IPv4, this field can indicate a packet of a maximum length of 65,535 bytes.
However, IPv6 is meant to carry heavy loads of traffic, much more than was originally
envisaged in version 4. As a result, if the Payload Length field has a value of 0, the
packet is considered a jumbogram, and is consequently able to carry much more
data than the MTU; the maximum value can reach 4,294,967,295 bytes. The Time to
Live and Hop Limit fields limit the propagation of an IP packet to a certain
lifetime; the lifetime is represented by how many routers the packet passes
through, as each router decreases the value by one.
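The fixed 40-byte layout of Figure 3 makes the header straightforward to parse. The following Python sketch (field names are my own labels, not protocol constants) unpacks the fields discussed above:

```python
import struct

def parse_ipv6_fixed_header(pkt: bytes) -> dict:
    """Unpack the fixed 40-byte IPv6 header of Figure 3."""
    # First 8 bytes: 4-bit version, 8-bit traffic class, 20-bit flow label,
    # 16-bit payload length, 8-bit next header, 8-bit hop limit.
    vtf, payload_len, next_header, hop_limit = struct.unpack("!IHBB", pkt[:8])
    return {
        "version":        vtf >> 28,           # always 6
        "traffic_class":  (vtf >> 20) & 0xFF,
        "flow_label":     vtf & 0xFFFFF,
        "payload_length": payload_len,         # 0 signals a jumbogram
        "next_header":    next_header,
        "hop_limit":      hop_limit,
        "src":            pkt[8:24],           # 128-bit source address
        "dst":            pkt[24:40],          # 128-bit destination address
    }

# Version 6, payload of 1280 bytes, UDP (17) next, hop limit 64, zero addresses:
hdr = struct.pack("!IHBB", 6 << 28, 1280, 17, 64) + bytes(32)
print(parse_ipv6_fixed_header(hdr)["hop_limit"])   # 64
```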
The daisy chain concept is a new addition to the next generation IP. It allows
greater flexibility of the protocol through the use of different headers that perform only
specific tasks. These headers are added and removed based on the needs of the data
transmission. The Next Header field in IPv6 was created to point to the existence, if that
is the case, of another header behind the one being processed. Accordingly, a chain of
sequential headers can be created, each one having a clear function and a simple
structure. In IPv4 this was possible through the use of the Options field, but that
approach makes the protocol much more inflexible and harder to process.
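As an illustration of the daisy chain, the hypothetical sketch below follows Next Header values through a sequence of extension headers until an upper-layer protocol is reached; the protocol numbers are the IANA-assigned ones, and the length rule is the simplified common case:

```python
# IANA protocol numbers of some IPv6 extension headers:
# 0 = Hop-by-Hop Options, 43 = Routing, 44 = Fragment, 60 = Destination Options.
EXT_HEADERS = {0, 43, 44, 60}

def walk_chain(first_next_header: int, payload: bytes) -> list:
    """Return the chain of header types, ending with the upper-layer protocol."""
    chain, nh, off = [], first_next_header, 0
    while nh in EXT_HEADERS:
        chain.append(nh)
        if nh == 44:                    # the Fragment header is fixed at 8 bytes
            length = 8
        else:                           # others: (Hdr Ext Len + 1) * 8 bytes
            length = (payload[off + 1] + 1) * 8
        nh = payload[off]               # first byte names the following header
        off += length
    chain.append(nh)                    # e.g. 6 for TCP, 17 for UDP
    return chain

# Hop-by-Hop -> Fragment -> TCP (padding bytes shown as zeros):
data = bytes([44, 0] + [0] * 6) + bytes([6, 0] + [0] * 6)
print(walk_chain(0, data))   # [0, 44, 6]
```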
Headers in IPv4 have a variable length, while IPv6 headers are static. Hence, the
IHL field defines the total length of a version 4 IP header, a field that is not needed in
version 6. Flags and Fragment Offset are used when packets are fragmented at
the source or along the path of the data flow. In IPv6 these were eliminated, because
fragmentation at routers is forbidden in version 6. However, the source of the transmission
can fragment the packet. In this case a new header, which takes over the fragmenting and
recomposing responsibilities, is added after the original IPv6 header, as depicted
in Figure 4.

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Next Header | Reserved | Fragment Offset |Res|M|
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Identification |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 4. Fragmentation Header [4]
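To illustrate how a sending node uses this header, the hypothetical sketch below splits a payload into fragments. As in the real protocol, the Fragment Offset is expressed in 8-byte units and the M flag marks every fragment except the last; the function and its return shape are illustrative, not part of any standard API:

```python
def fragment(payload: bytes, max_fragment: int) -> list:
    """Split a payload into (offset_in_8_byte_units, more_flag, chunk) tuples."""
    # Every fragment but the last must be a multiple of 8 bytes long.
    chunk = (max_fragment // 8) * 8
    out = []
    for off in range(0, len(payload), chunk):
        part = payload[off:off + chunk]
        more = off + chunk < len(payload)   # M flag: more fragments follow
        out.append((off // 8, more, part))
    return out

# A 20-byte payload with room for 11 payload bytes per fragment -> 8-byte chunks:
for offset, more, part in fragment(b"x" * 20, 11):
    print(offset, more, len(part))
# 0 True 8
# 1 True 8
# 2 False 4
```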

3 Virtualization and IPv6

Efficiency is always one of the main focuses in the computer world. Resources, storage
and networks must meet high efficiency criteria while at the same time keeping a
low degree of coupling between these entities. Virtualization is a concept that made all
of this possible and provided specialists with the means of achieving high resource
utilization for a high yield.
Productivity, resource management and cost effectiveness can thus reach levels
that are very tempting not only to IT specialists but also to customers and business owners.
Consequently, the virtualization paradigm started changing the existing internet and
created new paths for development. In addition, virtualization can be seen as a
disruptive technology that will drive away the inflexibility of today's internet.
It has to be pointed out that virtualization and IPv6 can both profit from each
other. Through a paradigm shift in the internet, IPv6 now has the opportunity to be
deployed faster, but this is not enough. The benefits of IPv6 for virtualization have to be
pointed out and explained in a clear manner, because IPv6 brings more
subtle advantages that are not clearly defined and differentiated from already existing
protocols.
Virtualization can be applied to a broad range of concepts, from overlay
networks to software, in which servers and OSs are created and run in virtual machines.
IPv6 can bring improvements in performance, ease of use and troubleshooting.
The processing burden of networking equipment is decreased through the simple
representation of IPv6 headers and structures. The new functionalities of ICMPv6 provide
more efficient ways for hosts to communicate with each other and can prove to be a
solution to the L2 broadcast and scaling problem presented in [6, 7]. Last but not least,
the mandatory support of IPsec in IPv6 implementations makes all communications more
secure through proper authentication and encryption, a requirement that should be
mandatory in virtual environments.
Based on the ideas presented in [8], it can be argued that in the future the role of
traditional ISPs will change. Nowadays, ISPs offer not only infrastructure
access but services too. These two roles may at some point be divided and subsequently
create separate entities. That means that the same separation has to be made when
talking about protocols. IPv6 will have a greater impact on the underlying physical
infrastructure of the virtual environment, but it will provide some advantages to the
software component as well, such as better VM migration.
Businesses and potential customers of cloud services will have to make the same
separation based on what they need from the cloud services: renting a whole cloud
hardware infrastructure or only software services.

3.1 VPNs

Virtualization as a concept has been around for many years [9], but not until recently
has it grasped the whole internet through its use in different services. It can be said that
this idea is still in its infancy and has yet to prove itself and expose all the pros and
cons attached to it. Fortunately, VPNs have been around for a longer time, proving their
usefulness and making it possible to extrapolate their benefits to the virtualization
concept.
VPNs can be considered a type of virtualization, offering a method of creating
an overlay, secured network over the public internet. They offer a little bit of a history
lesson and a good example of how virtualization can benefit from the implementation of
new protocols. Nowadays, VPNs are starting to take advantage of IPv6 thanks to the benefits
and improvements it brings. It can be assumed that cloud computing will follow the
path of VPN technology and profit from the enhancements IPv6 offers.
IPsec is used in VPNs, but with IPv4, IPsec is hindered by NAT. That makes
switching to IPv6 an obvious and direct benefit: Authentication Headers can now be used,
thanks to the fact that NAT is obsolete in IPv6 networks and environments.
Furthermore, [10] clearly states some of the advantages of
implementing VPNs over IPv6. This outlines a first step toward the goal of this thesis,
showing that the new protocol has a beneficial impact on virtualization.
Cloud computing and VPNs are two concepts that are tightly connected to each
other. Virtual private networks provide a secure means of connecting to remote data centres,
as well as an efficient and robust way for roaming clients to access their services and
data. Therefore, it can be concluded that cloud services benefit from more capable
IPv6-based VPNs.

3.1.1 Traditional versus virtualized ISPs

As virtualization increases its presence in internet, new role shifts will take place in the
traditional ISP. As stated before, a proper delimitation will happen between hardware
infrastructure and software infrastructure providers. Infrastructure providers will
manage hardware equipments and will create the underlying physical networks, taking
the role of traditional ISPs. Service providers will offer services, from virtual networks
to different software services, creating a new entity, the virtual ISP.
IPv6 transition and implementation can have a catalyst role in the development
of both of these two new providers. But at present the low IPv6 deployment shows low
resilience in implementing new protocols and comes to support the idea of internet
ossification [1], something that is not adequate in a faster growing online environment.
The ubiquity of virtualization and the fact that it can greatly benefit from the
adoption of the next generation IP, will force the internet to relinquish its old habits.
Consequently, it seems that there is a strong correlation between the future of the
internet, virtualization and IPv6. As seen in [11], virtualization offers great test beds for
development of technologies and offers new ways to use already existing
infrastructures. It can greatly expand the use and possibilities of the internet.

The players that will have a place in the development and future of the
internet will have to understand that inflexibility is not wanted in the
great structure that is today's internet. Nonetheless, they will have to properly
understand the advantages that new technologies have to offer. As a result, traditional
and virtualized ISPs will have to know the benefits that IPv6 will bring to their services,
whether network-wise or software-wise.

3.2 Virtualization software - hypervisors and virtual networking

When talking about cloud computing and all that is virtual, everything reduces to two
essentials: virtualization software and hypervisors. These are the cornerstone of everything
the internet is becoming. Without advances in this area, the idea of computing in the cloud
would be unrealizable. Virtualization software gives multiple OSs the chance to run
on the same machine; it gives the opportunity to clearly separate resources
independently of the underlying hardware, decrease costs, improve management and,
most importantly, increase the efficiency of resource use. Hypervisors and virtual
network adapters are two key software components that made virtualization possible.

Figure 5. Non virtualization vs. virtualization

The hypervisor is software that shares the physical server resources between
several virtual machines (Figure 5). It separates the guest OS from the underlying host,
or hardware. It controls the CPU, memory and I/O operations in such a way that each VM
instance thinks it can access all the actual resources of the server. Moreover, all
guest OSs work without knowing about each other's existence on the same hardware.
Hypervisors are classified into two categories, as seen in Figure 6. Type 1, or
bare metal, implies that the hypervisor is installed and operates directly between the
hardware and the VMs. A type 2 hypervisor is set up inside an already installed
operating system, the host OS. Hypervisors and their advantages to computing
would be of no use if the VM could not connect to the outside world, to the network.

Regardless of the type, hypervisors have to use virtual network interfaces in order to
create a connection between the guest OS and the real NIC.

Figure 6. Type 1 and Type 2 hypervisors


Virtual network adapters are present in the virtualized space. They make
communication between VMs, and between VMs and the outside world, possible. But
contrary to the traditional network card, they are not represented by any kind of
hardware. They are implemented and work only at an abstract, software
level, which means that their performance and ability to manage network traffic are
paramount to the optimal functionality of the VM.
Nevertheless, despite its software representation, the virtual adapter is seen by
the guest operating system as a proper physical one. Furthermore, the network, host and
protocols do not distinguish between a physical NIC and its virtual counterpart. That
pushes the importance of the virtual card in the virtualized network even further.
Proper studies and tests have to be undertaken before deployment of data centres
can occur. Functionalities differ from one virtual network device to another, as does
performance under similar conditions. [12]
Both hypervisors and virtual network adapters eventually lead to the idea of
virtual networks and virtual networking. These allow the VMs and their host operating
system to communicate with each other as if they were using an actual, physical
network. When deploying a data centre that will eventually support a cloud computing
service, the benefits of certain virtual cards, as well as the benefits and
drawbacks of the virtualization mode, have to be properly assessed.
Virtualization mode refers to how the guest systems, the host OS and the outside
network interact with each other. These modes commonly include bridge mode,
NAT mode and host-only networking. In bridge mode the guest OS connects to the
physical LAN as if it were an actual machine; thus a transparent and independent mode
of accessing the outside network is possible. NAT mode offers the same functionality
as an ordinary NAT device: the guest OSs and the host all share the same IP and
MAC address. Host-only networking restricts communication to the main machine
and the VMs it hosts. In this mode there is no interaction
with the physical interfaces; instead, a loopback interface is created that carries the
network traffic between the actual machine and its virtual instances.
TUN/TAP is an open source virtual adapter that is used, in some virtualization
software, to implement scenarios such as the ones described above. It is composed of two
modules: TUN, which operates at layer 3 of the OSI model, dealing with IP packets and
routing, and TAP, which simulates an Ethernet adapter and handles incoming and outgoing
frames. It is used mostly in Linux-based virtualisation software, from tunnelling
protocols (OpenVPN, Hamachi) to virtual machine networking (KVM, VirtualBox).
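On Linux, a TAP interface is created by opening the `/dev/net/tun` device and issuing a `TUNSETIFF` ioctl. The following sketch shows the essential steps; the interface name `tap0` is an arbitrary example, and actually creating the device requires CAP_NET_ADMIN privileges (typically root):

```python
import fcntl
import os
import struct

# Constants from <linux/if_tun.h>
TUNSETIFF = 0x400454CA
IFF_TAP = 0x0002     # layer 2 (Ethernet) device; IFF_TUN would give a layer 3 device
IFF_NO_PI = 0x1000   # do not prepend packet-information bytes to each frame

def pack_ifreq(name, flags):
    """Build the 40-byte ifreq structure expected by TUNSETIFF."""
    return struct.pack("16sH22x", name.encode(), flags)

def open_tap(name="tap0"):
    """Create a TAP device; requires CAP_NET_ADMIN (typically root)."""
    fd = os.open("/dev/net/tun", os.O_RDWR)
    fcntl.ioctl(fd, TUNSETIFF, pack_ifreq(name, IFF_TAP | IFF_NO_PI))
    return fd  # read()/write() on fd now carry raw Ethernet frames
```

Each frame read from the returned descriptor is a complete Ethernet frame, which is what allows hypervisors such as KVM or VirtualBox to bridge guest traffic onto the host network.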
Other common virtualized devices are: AMD PCNet PCI II (Am79C970A),
AMD PCNet FAST III, Intel PRO/1000 MT Desktop (82540EM), Intel PRO/1000 T
Server (82543GC), Intel PRO/1000 MT Server (82545EM) [13, 14]. As stated in [13,
p. 88], virtualization software has some limits that have to be taken into account
with respect to jumbo frames. This aspect supports the idea that a proper assessment
of the network driver, the hypervisor and the network virtualisation mode has to be
made carefully and correlated with the business plan so that optimum performance is
achieved.
IPv6 and its functionalities have to be properly evaluated based on the driver
chosen or shipped by default with the hypervisor. Only at this layer is IPv6 able to
effectively demonstrate its new features and advantages to the networking environment.

3.3 Server farms and potential problems

Server farms or data centres represent the base of the new, centralized internet. They are
characterized by large, efficiently cooled rooms in which numerous servers run
simultaneously to provide different services, from pure processing power to database
storage. Communication between these machines is done through high-speed
networking technologies and equipment. Even so, data centres can
suffer performance problems, from high latency to network
traffic bottlenecks and data corruption, making proper network planning mandatory
if these problems are to be largely avoided.
Different impediments in data centres must be resolved so that efficiency is
increased and the TCO stays within the planned limits. This is even more important for
data centres that support cloud computing services or provide virtual infrastructures.
VMs can reach large numbers, from hundreds to thousands and maybe even more,
hence there is an increased burden on the network through which all these virtual
machines communicate.
Virtual server farms share the same potential problems as the traditional ones,
where every single machine was represented by one and only one operating system. But
because virtualization brings high efficiency and consequently low idle times for
servers, traffic in data centres tends to be more intense and denser. This correlates with
an increased burden on the links that connect the physical servers to the routers and the
outside networks. The ratio of VMs per server, as stated in [15, 16], is quite high,
reaching 12:1 or 15:1, and potentially this ratio could increase even more in the near
future. [17]
Both application and network problems can compromise data centres. Even
though these problems have a multitude of causes, IPv6 can bring some advantages that
are worth taking into consideration. Server farms can have I/O bottlenecks, especially
with storage and databases. The same can happen with traffic flow in the case of huge
virtualized data centres. This creates a commonly occurring problem: congestion. The
new control implementation of IPv6, ICMPv6, can better cope with huge traffic flows.
It also provides new methods for management, troubleshooting and mobility, functions
that can greatly improve the quality and reliability of any data centre and, as a result, of
any cloud computing platform.
As stated before, virtualization can considerably increase the traffic in a data centre,
mainly because each VM is seen as an independent machine with its own IP and MAC
address. That leads to an overwhelming number of broadcasts generated by ARP
[18, 19]. This problem can be alleviated with IPv6, whose neighbour discovery
relies on multicast rather than broadcast, and through anycast addressing.
In the following subchapters some of the problems commonly seen in server
farms are presented and preliminary solutions are outlined.

3.3.1 Traffic overhead and load balancing

When talking about networking and data centres, traffic has a high impact on the overall
performance of the services offered. TCP/IP packet handling is a sensitive issue that has long
been treated with utmost care. Inexact use and deployment of the
stack can result in data corruption, unnecessary retransmissions, traffic
bottlenecks and high latency. This has led to many studies that monitored the
behaviour of both the IPv4 and IPv6 protocols in different network environments.
From the application point of view, both the Ethernet and IP headers are seen as
overhead. Unfortunately, the new IP protocol creates a bit of a problem in this
respect. The IPv4 header is at least 20 bytes long, while the IPv6 header is twice as
large, fixed at 40 bytes. When IPsec is used with IPv6, the overhead of the protocols
increases even more. This poses a problem, especially because the majority of packets
in the internet are small, in IPv4 [20] and in IPv6 as well [21]. That means that the
headers can make up quite a large share of the actual traffic, making data transmission
less efficient. Furthermore, as shown in a small-scale test [22], IPv6 presents an
increase in overhead compared to IPv4. This means that, potentially, the new protocol
can create further problems in data centres and consequently for the cloud computing
idea.
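As a rough illustration of the figures above, the share of the wire size taken by headers can be computed directly. The 64-byte payload and 14-byte Ethernet header below are example values, and transport-layer headers and link-layer options are deliberately ignored:

```python
def header_share(payload_bytes, ip_header_bytes, eth_header_bytes=14):
    """Fraction of the on-wire packet size taken by Ethernet + IP headers."""
    total = payload_bytes + ip_header_bytes + eth_header_bytes
    return (ip_header_bytes + eth_header_bytes) / total

# 64-byte payload: IPv4 (20-byte minimum header) vs. IPv6 (fixed 40-byte header)
ipv4_share = header_share(64, 20)  # roughly 35% of the wire size is header
ipv6_share = header_share(64, 40)  # roughly 46% of the wire size is header
```

For small packets the doubled IPv6 header thus raises the header share by about ten percentage points, which is the inefficiency the header-customisation and compression approaches discussed below try to claw back.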
The flexibility of IPv6 is one of its strongest points. It can be customized
according to the customer services it has to serve and the traffic it handles. For
example, the problems addressed earlier can be mitigated through customisation of the
header: in LAN communications IPsec can be opted out, thereby reducing the overhead
and extra information. Depending on the level of customization that the provider wants
to apply, a solution like the one presented in [21] can easily avoid some of the
problems.
Furthermore, header compression protocols [23] can be used with IPv6. This
might solve the problem of the overhead while at the same time making use of the
benefits and advantages that the next generation IP provides.
Load balancing is a method that offers current data centres the possibility to
scale their computing power and distribute traffic and processing load across a
multitude of servers. Furthermore, it offers a method to distribute the traffic load across
different segments of the network. One common approach to load balancing uses a
dedicated network balancer; an anycast addressing scheme can provide some help as well.
Load balancers can benefit from the new neighbour discovery functionality,
which improves the flexibility and scaling methods for the servers behind them, since
new servers added to the infrastructure can be easily detected. Additionally, as
presented in [24], the direct routing load balancing method involves a problem
regarding the handling of ARP requests. This issue may find its solution in the full
deployment of IPv6: anycast addressing will reduce the need for broadcast floods and
can actually improve some of the load balancing methods.

3.3.2 Localization and migration

Politics and governments are always a possible obstacle to new initiatives
and ideas that are foreign to them. Cloud computing is still in its infancy and has not
yet provided enough evidence of its benefits to the business world. That leads to
government scepticism concerning data security, potential loss of business and so on.
This means that every country has its own rules regarding potential
cloud services, especially the ones that provide payment or financial services.
Furthermore, big cloud computing businesses usually have several data centres scattered
around the globe, for better coverage and service.
As a consequence, in some cases VMs have to be moved from one physical
server to another, or even between two different data centres. In addition, VM migration
can increase the flexibility of the computing processing power in grid computing by
shifting VMs to where they are needed. This allows virtual machines to be moved
dynamically between data centres or physical servers to perform specific tasks, improve
performance and balance resource load.
Possible design objectives for achieving flexibility in data centres, with their
associated design requirements, are stated in [25] and [26]. Both support the ideas
that the migration of a VM across different physical servers has to be transparent to the
user, that all data connections have to be maintained throughout the migration process,
that it has to be done as fast as possible, and that the destination VM has to be 100%
identical to the source one. In [26] and [27], mobile IPv6 is proposed as a solution for
better transfer of virtual machines while keeping the migration requirements.
Furthermore, in [28] the migration of the virtual machine is done along with the
persistent file that is used by the VM, usually a file of considerable size that is stored on
the local servers.
In two of the above examples, IPv6 benefits are already exploited by
implementing mobile IPv6 for data transfer between data centres. It can be argued
that the use of mobile IPv6 can be improved even further by implementing IPv6 QoS
techniques in the home agent. This will provide better ways to deal with large amounts
of VM migration and management. In [28], the migration of the persistent file may be
improved by the use of jumbograms, a feature of the new IP protocol.
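The potential gain from larger packet sizes can be illustrated with a quick packet-count estimate. The 20 GB image size below is a hypothetical example, a 20-byte TCP header is assumed, and link-layer overhead is ignored; an IPv6 jumbogram payload can in principle reach almost 4 GiB on links that support it:

```python
def packets_needed(file_bytes, mtu_bytes, ip_header=40, tcp_header=20):
    """Packets required to move a file, given the per-packet IPv6 + TCP overhead."""
    payload = mtu_bytes - ip_header - tcp_header
    return -(-file_bytes // payload)  # ceiling division

image = 20 * 10**9                    # hypothetical 20 GB persistent VM file
std = packets_needed(image, 1500)     # standard Ethernet MTU: ~14 million packets
jumbo = packets_needed(image, 9000)   # jumbo frames: ~2.2 million packets
```

Fewer, larger packets mean proportionally less per-packet header and processing overhead during migration, which is the argument for jumbo frames and, at the extreme, IPv6 jumbograms.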
All in all, virtual machine migration is one of the main problems present
in virtual data centres around the globe. Live migration is a great solution for
providing the most flexibility in data centres, and it also provides the means to offer
roaming customers the best service by moving their VMs physically as close to them as
possible. Consequently, through the next generation IP, new advantages can be added
and utilized.

4 Cloud computing

Cloud computing is used more and more often in the online environment and it can, at
times, be a confusing term. Technically, it involves any kind of resource, software or
hardware, that is created and sold as a service by a third party. Roughly, that means
outsourcing IT infrastructure to specialized companies that offer processing power
or software based on customer demand and budget.
The term cloud comes from the fact that in network diagrams the internet has
always been depicted as a cloud, through which all the smaller components, such as
nodes, hosts and smaller networks, communicate with each other. That implies that cloud
computing always involves the use of an online component. Subsequently,
resources are always accessed remotely through the internet.
Many people confuse the idea of virtualization with cloud computing. It has to
be clearly stated that virtualization is not cloud computing. Instead, virtualization is just
a technical method to create an abstract entity from a physical one. As stated before in
this thesis, virtualization created the ideas of the virtual machine and the virtual network.
Accordingly, it can be said that virtualization is just the means through which cloud
computing is implemented and offered as a service.
As stated before, cloud computing is all about services provided remotely and
independently from the customer's IT infrastructure. The computing part refers to the
services that are sold over the cloud. Hardware infrastructure can be sold to
customers, who in turn can customize it as they want, without having to worry about
managing and troubleshooting the data centre itself.
Software is also an important part of the cloud computing idea. As will be
presented, when talking about software in the cloud there are two different approaches
that imply different levels of interaction with the underlying infrastructure: one offering
the possibility of creating your own applications and the other granting access only to
predefined ones.
The possibilities that cloud computing brings to the business community,
from customizable infrastructures to customizable cloud-based software, are hard to
ignore. This means that services which saw false starts [8] in past years, such as
multicast, security and differentiated services, are now becoming an incentive in cloud
computing. What is even better is that cloud computing relies on these services to grow.
The popularity of cloud computing is growing very fast. More and more companies
choose to outsource their IT needs to different companies around the world. The
consequence is that, in the near future, cloud computing can grow beyond its
capabilities and be crushed under its own success. IPv6 deployment has been slow up
until now, and maybe it will remain so, but "The introduction of IPv6 is envisaged
as a solution, not only because of its huge address space, but also because it provides a
platform for innovation in IP-based services and applications." [34]. Thus, the
innovation that IPv6 can bring to cloud services can truly launch the implementation of
IPv6 at an increased pace.

4.1 Software infrastructure versus hardware infrastructure

In Cloud Computing, infrastructure can be divided into two parts: hardware and
software. The reason behind this separation is that both can be sold as services or
products. Cloud computing is all about offering some kind of outsourced infrastructure
to companies that seek to be more efficient with their IT budgets or those that cannot
afford it. Furthermore, based on the level of customization of these infrastructures, three
building blocks can be defined for any cloud computing service: IaaS (Infrastructure as
a Service), PaaS (Platform as a Service) and SaaS (Software as a Service).

Figure 7. Hardware infrastructure and software infrastructure

Infrastructure can be defined as the underlying structure that offers and allows
upper layer services to perform their tasks. It allows the interaction between different
entities using the same language or architectures. Figure 7 depicts the two
infrastructures that make up the cloud computing concept.
Online hardware is a collection of online physical or virtual resources that are
accessed through a remote connection. They are represented by fully deployed
networks. As will be presented in the next chapters, some implementations of cloud
computing, like private clouds, allow the use of resources through actual hardware
rental. Physical servers, virtual servers or entire fully operational networks are rented by
third parties to the companies that need them; they are data centres outside the client's
organization that are maintained by a specialized firm.
When talking about online hardware resources one must take into account all the
components and the SLA that the provider will render. The servers, architectures and
protocols used, all contribute to a stable, scalable and efficient service. It needs to be
mentioned that the benefits of IPv6 are mainly seen at this lower level: the underlying
hardware and network that comprise the online hardware. Moreover, the online
hardware encompasses the virtualization process that takes place in these environments.
Virtual machine managers, hard disks, network adapters and other virtualized hardware
are very important in the whole picture of cloud computing.
Virtualized hardware is very important to the overall performance of the services,
and new technologies can improve its operation. Properly outlining the advantages
will not only improve the hardware layer; the online software layer will
benefit from the new protocols and hardware developments as well.
IaaS, or infrastructure as a service, is the concept that turns the above-mentioned
hardware resources, either physical or virtual, into a product that can be marketed.
Through IaaS, cloud computing vendors can sell processing power, offered either as
simple servers or as virtual resources (virtual machines, online storage, etc.), to different
customers. In turn, the customer has access to its own hardware infrastructure and has
the option to modify and use it as it sees fit.
Online software is the cornerstone of cloud computing. It is the layer that
provides the most services and functions, and it behaves like normal, locally installed
software. To expand further, all software programs that can be accessed remotely
through a web browser, or any other kind of remote connection technology, and that
behave like any other piece of local software can be considered online software.
But we have to go a step deeper into the concept and split online software
into two branches. Providers can offer predefined software, for example Google Docs,
Zoho and many more. Here the customer can only use, and customize to some extent,
the services that are already available. This approach to cloud computing is characterized
by SaaS, or software as a service. On the other hand, services such as Google Apps
give customers more freedom to create their own applications based on the
existing tools offered by the provider and the specifications of the underlying software
infrastructure. The customer has access to online database storage and usage, website
hosting, mobile software support and many more elements that can be used in custom
software. In other words, the customer has access to a platform on which software can
be built according to certain specifications. PaaS, or platform as a service, is the concept
that turns custom online software into a product.
In the online environment all the elements interact with each other through,
simply put, networks and dedicated online connections. It can thus be deduced that the
performance of the networks that support the online computing service is of critical
importance. We have to be aware that almost all data exchange is done through remote
connections that are influenced by the networking protocols. Specific online software
needs the best possible performance when accessing remote databases or any other kind
of data. Hardware has to be able to process very efficiently the different protocols used
in the communication between physical machines and virtual machines. The benefits of
newer protocols cannot be ignored, and this is the case with IPv6, given its potential to
improve the overall performance of virtual networks and the services that they provide.
Even though online software is the most important component in cloud
computing and online hardware is the transparent one, often unseen by the user, the
execution of the software depends on the performance of the underlying hardware
infrastructure, which in turn depends heavily on the machines and the protocols they use.
It has to be stated now that all the work will revolve around the hardware part and the
protocols of the underlying network that serves the software component.

4.2 Implementations of Cloud computing

Cloud computing offers online infrastructure that customers can adapt and use as they
see fit. But to further understand the impact of a new protocol on these infrastructures,
we have to differentiate and detail the models in which cloud computing can be
implemented. The deployment model of an online computing service can be divided
into four categories: private cloud computing, hybrid cloud computing, cloud hosting
and, the most commonly used, public cloud computing.

Figure 8. Cloud Computing deployment models


Private clouding, or internal clouding, as depicted in Figure 8, is one of the
simplest models. It implies the use of all available infrastructure by only one customer,
who can choose to host the data centre internally, in its own organization, or have it
managed by a third party company. In the latter case, the customer will rent
the cloud infrastructure from an IaaS vendor. Consequently, new developments in
computing and networking technologies can be implemented more easily than in the other
models; the customer is able to enforce and adapt the cloud infrastructure, from the
actual physical servers and data connections to the protocols, software and security, as
they consider suitable.
One can argue that the private model lacks all the benefits that cloud computing
brings to the IT world: on-demand computing power, lower cost of ownership and
flexibility. In some cases this may be true, but one has to be aware that this type
provides the best security, and thus it can be a first, timid step of a company towards
cloud computing and its full benefits. In addition, private data centres are suitable for
big corporations that can support the cost of such deployments, implement their internal
security policies and benefit from full security standards.
Hybrid clouding, as the term implies, is a service that merges two models into
one: the private and the public service. This model offers a choice for customers that
want to reduce their IT service cost by outsourcing a part of their IT infrastructure. It
can encompass all the building blocks of cloud computing: IaaS, PaaS and SaaS.
For example, the user can choose to rent private hardware infrastructure for certain
purposes, use a cloud platform to create custom software and deploy it on the
infrastructure of a PaaS vendor, and use SaaS for email or document editing.
Cloud hosting is external to the customer company and it provides the most
flexible and budget friendly model. It is characterized by the possibility of renting
virtual machines on an as-needed basis. To properly understand the concept, Amazon AWS is
such a service, in which a potential user can rent different VMs to perform any kind of
job. Depending on the service, the VMs are rented based on hourly usage, VM
performance, traffic or other options. In addition to virtual machines, online storage can
be bought and used. After the customer has finished using the rented resources, these
are freed and made available for other purposes. Cloud hosting is another approach that
IaaS can take. However, the entire physical infrastructure on which the cloud hosting
service relies is outside of the customer's reach.
The public cloud is a service that is generally available and requires the least
knowledge about IT. This model is the most popular one amongst home users, because
it provides access to basic software and services, for example Google Docs, Google
Calendar, Gmail, etc. Furthermore, public cloud services like Google Apps or Zoho
Creator offer tools to create user-specific applications and deploy them on the cloud.
However, the user does not have the possibility to access or configure the hardware or
software infrastructure in any way. As a result, public clouds can encompass the PaaS
and SaaS building blocks and provide the service for free or, over a certain user or
traffic quota, for a fee.

4.3 Insecurities and problems in cloud computing

In the context of new ideas and concepts arising, people tend to be reluctant to accept
them. Scepticism and insecurity take hold of their rational thinking and
adventurous driving force. When these two emotions intersect the business
environment, where risk, profits and even social status come into question, the issue at
hand is further amplified, arriving at a point at which new ideas are not only put aside, but
rejected as a whole and not even taken into consideration.
The cloud computing business has seen this happen over and over again.
Many companies are still reluctant to consider moving their IT infrastructure into the
cloud, if not all of it then at least a part. Moreover, some of them are not even aware of
the concept, or of the fact that they are already using some sort of cloud service, as stated
by the president of Trend Micro, Dave Asprey, in a survey about insecurities of cloud
computing: "On top of that, some respondents didn't even know they were using the
cloud, much less securing it."
However, we have to be unbiased and acknowledge the fact that some of the
problems put forward by several companies have a real basis. Businesses and their
success are based on the confidentiality and security of their data. Cloud computing,
by its definition, means outsourcing your data infrastructure to a third party
company. Hence, all your data security is in the hands of a company outside your
company's policies, a company that may not implement and deploy the best methods to
protect your data's integrity and confidentiality.
In a survey conducted in 2011 by Trend Micro [29], 43% of the surveyed companies
had at some point had security issues with their cloud computing providers. Furthermore,
the article points out another big concern: while security is still a problem, another
arises in the form of performance and availability. The percentages presented, 50% of
companies concerned about security and 48% about performance and availability, paint
a grim picture for the future of cloud computing.
"Data in the cloud is vulnerable without encryption." [29] Accordingly,
companies encrypt the data they store in the cloud and tend to choose services that
include encryption in their offers. So, it can be determined that security concerns in the
cloud are strongly related to the need for powerful encryption of the stored data and
reliable encryption key management. One of the problems that arises with encryption
keys is their safe exchange, since this is usually done over an unsafe environment, like
the internet. IPv6 provides a safer networking environment for encryption key
exchange, through the authentication and encryption functionalities defined in IPsec,
along with the framework protocol defined by ISAKMP.
The article [30] points out that security problems can often be attributed to
unaware or unprepared customers, as well as to the provider itself not deploying enough
security methods. Amazon's cloud computing business plan offers means for customers
to create personalized VMs and make them available to other users. This creates the
possibility of many security breaches and data theft, due to inefficient and incomplete
removal of sensitive information before making the VM image widely available to
other customers. One problem presented is the exploitation of SSH keys. [30, p. 396]
One solution to this problem is to restrict the IP addresses that can access certain VMs.
With IPv4, which is what is deployed in the Amazon infrastructure, it is almost
impossible to achieve such a solution; with IPv6, this issue can be solved. Furthermore,
the authentication that is fully functional with IPv6 can offer protection against
unauthorized access to deployed VMs.
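An address-based restriction of the kind described above can be sketched with Python's standard `ipaddress` module. The /48 prefix below is taken from the IPv6 documentation range (2001:db8::/32) and stands in for a hypothetical administrative network, not a real deployment:

```python
import ipaddress

# Hypothetical administrative prefix allowed to reach the VM
ALLOWED_PREFIXES = [ipaddress.ip_network("2001:db8:cafe::/48")]

def is_allowed(source_ip):
    """Return True if the source address falls inside an allowed prefix."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in prefix for prefix in ALLOWED_PREFIXES)
```

Because IPv6 gives every VM a globally unique address, such filters can be expressed per machine instead of being blurred behind a shared NAT address, which is what makes this approach impractical under IPv4.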

Presented above is one of the benefits, concerning security, that the deployment of
IPv6 can bring to many cloud computing services around the internet. Performance
concerns also have a solution in the next generation IP: NAT avoidance and the
flexibility of the IP headers can improve the latency and overhead of the network, as
will be presented later in the paper.
IPsec, tunnelling, data integrity and security will all benefit from the full
deployment of IPv6. Insecurities about migrating to cloud infrastructures and services
will diminish once cloud vendors fully implement and make available IPv6, and
understand that IPv6 makes possible solutions that are not possible with IPv4.
Furthermore, as customers and businesses realise that new technologies offer better
tradeoffs than older ones, as with IPv6 versus IPv4, the grim veil covering cloud
computing insecurities will not be as concerning as before.

4.4 Future of cloud computing

Digital data is nowadays ubiquitous. The quantity that is processed and managed
every day grows exponentially, beyond values that small or medium companies can
manage. All fields of work require more and more data manipulation and processing.
That means that soon businesses will not be able to afford to store and manipulate the
data that they need, unless it is done in an efficient and cost-effective manner. Thus,
cloud computing can be seen as a pertinent solution, one that sooner or later will be
adopted by all the players in the information environment and beyond.
Cloud adoption on a general scale is imminent; its services will be used by
more and more entities. A survey made by KPMG International in 2010 [31] showed
that companies' level of interest in incorporating cloud services into their business plans
is increasing. The survey also presented banks and government institutions as being
reluctant about the idea. However, in 2011 the US government issued a Cloud First
policy [32] as a way to decrease the cost of IT infrastructure and at the same time
increase efficiency and the ease of implementing new IT structures. As a consequence,
a programme has been developed to accelerate the adoption of cloud technologies by
the US government. [33] All of this supports the idea of sudden change and adoption by
formerly reluctant entities regarding outsourcing IT and using cloud computing services
instead.
In one of his books [34], author Christopher Barnatt defines four categories of
companies based on their adoption of cloud computing: pioneers, early adopters, late
adopters and laggards. According to the author, the peak of companies switching to
cloud services will take place between 2010 and 2020, these being the early and late
adopters. Correlated with the information presented above, with governments trying to
reduce costs and create more efficient IT infrastructures, we can assume that online
infrastructure, hardware and software will see a massive influx of customers.
The growth of interest in online services will put great pressure on the security,
performance and availability of data networks. This means that possible upgrades to
cloud computing in any of the fields presented before have to be not only taken
into consideration, but thoroughly examined for potential improvements. Cloud
services will experience more pressure from their customers, and providers can no
longer afford to postpone the adoption of new technologies, IPv6 being one of them.
The future relies on cloud service vendors offering the best products by
introducing the most efficient and reliable advanced technologies, and on
customers who, nowadays, seem more aware of the better trade-offs the cloud can
bring to their businesses. The benefits that IPv6 brings to cloud computing can also
help companies plan and develop cloud computing policies for their businesses.
Therefore, IPv6 can not only improve technical performance, but also improve
the perception of cloud computing and reduce uncertainty about security, privacy and
performance.

5 IPv6 benefits in cloud computing

5.1 Security benefits

Insecurity is one of the critical issues that make potential customers reluctant to
use cloud computing. The fact that storing sensitive data in the cloud
might cause business losses due to a low level of security should compel cloud
computing providers to deploy all the necessary measures to provide high security. Due
to the limitations of IPv4, some of the techniques created to prolong its life can
prove, in some situations, to be a barrier to properly secured data transmission. IPv6
has the potential to create a safer environment in which data can be exchanged easily
with no downside for security.
The mechanism developed to slow down IPv4 address exhaustion, network
address translation (NAT), can be considered one of the main obstacles that inhibit
proper security in the cloud and prevent the deployment of additional protection measures.
The protocol suite that ensures secure IP communication, IPsec, through data authentication
and encryption, is the main victim of NAT. As will be presented, avoiding
NAT opens new possibilities when deploying IPsec; the cloud computing customer will
have better security options at their disposal.

5.1.1 NAT avoidance

IPv6 came into existence with the idea of bringing new and advanced features that can
solve some of the problems IPv4 faces. The benefits that IPv6 brings to cloud
computing start with the most basic and obvious change in the protocol: the huge pool
of 2^128 addresses. The sheer number of available addresses not only solves the
critical problem of address exhaustion but brings advantages in other aspects of
networking as well, for example NAT avoidance.

Figure 9. Basic NAT

Figure 9 depicts the basic operation of a NAT router or any other NAT device
deployed in a network. In this example a computer situated in the internal network of
a company wants to communicate with a computer or server situated somewhere on the
internet. As can be seen, the internal computer is assigned a private IPv4
address. To communicate, the internal computer sends its packets with a source
address of 192.168.32.2 and the destination address of the foreign computer, 198.51.100.2.
Because the source address is not routable on the internet, the NAT device translates
it into a public one, in this case 198.51.100.1, and forwards the packet further
along the way. The packet's source address is now that of the NAT interface
facing the internet, while the destination address is kept. The server responds to the
request at the address 198.51.100.1, not knowing that the solicitor is not represented by
that address. When the reply reaches the NAT device, the reverse process takes place:
the public address is translated back into the private address of the
original solicitor, 192.168.32.2, and the packet is forwarded to the internal network
computer.
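The translation described above can be sketched as a toy model. The port-allocation scheme, field names and addresses below are purely illustrative assumptions, not any vendor's implementation:

```python
# Minimal NAPT-style sketch: rewrite the private source address of outbound
# packets and restore the destination of inbound replies from a state table.

class NatDevice:
    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.table = {}  # (private_ip, private_port) -> public_port

    def outbound(self, packet):
        """Rewrite the private source address to the public one."""
        key = (packet["src"], packet["sport"])
        # Allocate a public-side port for this internal flow, if new.
        public_port = self.table.setdefault(key, 40000 + len(self.table))
        return {**packet, "src": self.public_ip, "sport": public_port}

    def inbound(self, packet):
        """Map the reply back to the original internal host."""
        for (priv_ip, priv_port), pub_port in self.table.items():
            if pub_port == packet["dport"]:
                return {**packet, "dst": priv_ip, "dport": priv_port}
        raise KeyError("no translation state for this packet")

nat = NatDevice("198.51.100.1")
out = nat.outbound({"src": "192.168.32.2", "sport": 5000,
                    "dst": "198.51.100.2", "dport": 80})
reply = nat.inbound({"src": "198.51.100.2", "sport": 80,
                     "dst": "198.51.100.1", "dport": out["sport"]})
```

The key point for the discussion that follows is that the packet the server sees has a different source address than the one the internal host sent.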
The importance of properly understanding the basic functionality of NAT
resides in its effects on the security aspects concerning IPsec. Through address
translation, parts of the original packets have to be changed, which in turn makes the
use of some features of IPsec impossible. Authentication and encryption are the two
building blocks behind IP security. Authentication provides data integrity and validation
of the source, while encryption provides confidentiality and prevents data manipulation.
This is done through the AH and ESP headers that are added to the original
packet. In addition, IPsec has two modes, tunnel mode (host-to-gateway communication) and
transport mode (host-to-host communication), which imply two different modes of
header insertion, as presented in Figure 10.

Figure 10. AH and ESP header insertion

The authentication process implies creating a unique identification number,
commonly a hash, based on certain non-mutable parameters, one of which is the source
address in the IP header preceding the authentication header. The hash is added to the
AH header; the receiver of the packet applies the same algorithm to the same
parameters and compares the resulting hash with the one received. If they
do not match, the packet is discarded. Going back to the principles of how
NAT works, we can clearly see that if the sent packet's IP header is modified in any way
along the route, authentication will fail at the receiver. Consequently, when the
authentication mechanism provided by the AH header is used along a route that at some
point includes a network address translator, the communication will fail. Thus, through
NAT avoidance, full IPsec functionality can be achieved.
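A simplified sketch illustrates why this verification fails across NAT. The key handling and the choice of covered fields below are assumptions for the sake of the example, not the exact RFC 4302 procedure:

```python
# The ICV is an HMAC computed over non-mutable fields, including the source
# address; a hash over the post-NAT packet no longer matches the original.
import hmac, hashlib

KEY = b"shared-secret"  # negotiated out of band, e.g. via IKE

def icv(packet):
    data = (packet["src"] + packet["dst"] + packet["payload"]).encode()
    return hmac.new(KEY, data, hashlib.sha256).hexdigest()

packet = {"src": "192.168.32.2", "dst": "198.51.100.2", "payload": "hello"}
packet["icv"] = icv(packet)

# A NAT device rewrites the source address in transit...
translated = {**packet, "src": "198.51.100.1"}

# ...so the receiver's integrity check fails and the packet is discarded.
valid = hmac.compare_digest(icv(translated), translated["icv"])
```

Recomputing the hash over the translated packet yields a different value, so `valid` is false and the receiver drops the datagram.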
AH and ESP headers offer almost the same functionality with respect to
authentication; hence it could be said that little is lost when the AH header fails
in cloud computing communication. However, this holds only to the extent that ESP
authenticates just a part of the original packet. ESP authentication covers only
the tunnelled packet and does not take into consideration the headers outside the
tunnel, as can be seen in Figure 10. On the other hand, AH extends its protection
to the IP header and all extension headers (even the hop-by-hop ones) that precede it,
regardless of the transport type applied.
In Figure 11 the authentication header and its fields are presented. The integrity
check value (ICV) is an integral multiple of 32 bits and carries the authentication
data for the packet to which it is attached. The value is created based on non-
mutable headers: headers whose field values are known not to change along the
route to the destination. These values are used in a hash function to create a unique
authentication number. Based on this, the destination can verify the integrity of the
packet and that it was not modified along the way. As stated before, if a NAT device
exists along the route, the values in the non-mutable fields, especially the source IP
address, will change and thus the integrity check will fail. It can be observed that in this
case no real end-to-end IPsec connection, and consequently no complete security, can
be achieved.

0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Next Header | Payload Len | RESERVED |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Security Parameters Index (SPI) |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| Sequence Number Field |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
| |
+ Integrity Check Value (variable) |
| |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Figure 11. Authentication Header [3]
NAT is still widely employed due to the ubiquitous nature of IPv4. The
exhaustion of addresses imposes the use of network address translators even in cloud
computing environments. For example, Amazon EC2 does not provide unique, routable IP
addresses to the VM instances in the cloud, but rather uses a NAT mechanism to map
routable IP addresses to customer accounts or virtual private clouds [35]. One of the
benefits of cloud computing is the possibility to access data from everywhere,
regardless of whether that is from inside the corporate network or from an outside host.
As a result, not being able to establish a proper end-to-end connection with the cloud
services or with virtual servers rented in the cloud has consequences for the security,
confidentiality and integrity of the data stored in the cloud and for the functionality
of IPsec.
Security and data integrity are offered only along part of the path. Amazon, for
example, ensures security through VPN gateways. All security is applied between the
customer gateway and Amazon's gateways, or between customer hosts and Amazon's gateway,
so no end-to-end authentication takes place. The traffic between the cloud service
provider's gateways and the final virtual machine, host, etc. is left unprotected from the
point of view of authenticating the original source. Furthermore, even in tunnel mode,
the information outside the tunnel is not authenticated. As stated in RFC 4306 [36, p. 6],
end-to-end security is not fully applicable to IPv4 networks.
IKE is an important part of IPsec and therefore of IPv6. It is a protocol that lets
two peers negotiate security features, such as cryptographic
algorithms and keys for an ESP and/or AH security association. In some cases, when
NAT modifies parts of the TCP/UDP header, such as the source and destination port, IKE
functionality is impaired by the network translator. IKE itself communicates over UDP
port 500. ESP, however, encrypts the upper-layer headers of the IP packet, as
can be seen in Figure 10, including the TCP/UDP header. Thus, NAT cannot see the
source and destination ports needed in the case of NAT with port translation, resulting
in improper manipulation and forwarding of incoming packets.
The solution to this problem is offered in [37], through UDP encapsulation
of the already tunnelled packet. However, this can be treated as an example of a
workaround for a problem that would be easily solved by IPv6. Furthermore, the NAT
problem and the solution offered by UDP encapsulation of IPsec-tunnelled packets
increase the complexity of a system that is already highly complex: the virtual
environment, the data centres and the networking sub-layer on which they all rely for
proper functionality.
At first glance, NAT avoidance does not seem to provide a substantial benefit.
However, as the next chapter will show, by eliminating NAT from data centre
topologies, IPsec will bring new security possibilities. Furthermore, cloud
computing providers have to be as transparent as possible towards the client. As a
principle, it would be ideal if no client-traffic manipulation were performed. Nowadays,
however, such manipulation takes place due to technology limitations, the very problem
that forced the creation of NAT. This calls for a fast transition to IPv6 in order to
take advantage of its benefits. Until IPv6 is adopted on a large scale, customers and
providers will have to use transition techniques, which in this case involve the use of
NAT to some extent. Conversely, as will be seen in a later chapter, in some cases NAT
will still have to be used to provide certain services to data centres. NAT avoidance is
most beneficial for IaaS infrastructure, rather than for PaaS or SaaS.
5.1.2 IPsec

Security on the internet is one of the great concerns of the online environment. It is also
a great barrier for certain companies to embrace the cloud paradigm. Worries that
the data stored in and transmitted between the customer and the cloud provider are not safe
and well protected still dictate the choices of many companies and potential
customers. However, with the development of IPv6, IPsec offers the means to
protect data on the internet.

Figure 12. IPv4 vs. IPv6 AH authentication.


IPsec is a protocol suite developed to serve IPv6. Because of the slow
deployment of the next-generation IP, IPsec was adapted for IPv4 as well. As one would
expect, IPsec cannot work fully with IPv4 and the existing networks, in which patches
were introduced to mitigate the address-space problem. Due to these
techniques, the proper end-to-end security of IPsec can be broken, as can be seen
in Figure 12. This raises the issue that an IPv4 cloud provider's customer has no
idea at which points along the connection proper encryption and
authentication are applied; pragmatically, they do not even have to know. This is a
big problem, because a user can access cloud services from anywhere, not
only from the protected environment of a company's internal network. It is
mandatory that proper security is offered to the regular user, as transparently as possible
and as close as possible to the two end points of the internet connection.
It is said that it is easier to hack one computer than one data centre. Thus, the
security of a virtual private cloud or cloud infrastructure is only as strong as its weakest
host. Proper security has to be offered equally to all entities present in the data centre
of the cloud provider. This can be accomplished through IPv6 deployment, in which
support for IPsec is mandatory and can function at its fullest.
When talking about IPsec, a few mandatory concepts have to be noted. IPsec
makes use of three important databases on which all of its functionality is based: the
security policy database (SPD), the security association database (SAD) and the peer
authentication database (PAD).
- SPD: stores and specifies the services that are offered to IP datagrams.
  These services are bypass, discard and protect, and are separated into
  three different databases: SPD-I for inbound traffic that is bypassed or
  discarded from IPsec, SPD-O for outbound traffic that is bypassed or
  discarded from IPsec, and SPD-S for secured traffic, which is identified
  based on certain selectors (remote IP, destination IP, e-mail, DNS
  name, NAME). [38, p. 19]
- SAD: stores the security association (SA) parameters that were
  negotiated between peers: cryptographic keys, algorithms used for
  encryption, etc. [38, p. 34]. It is linked to the SPD to provide appropriate
  IPsec handling.
- PAD: specifies the peers that are allowed to negotiate
  IPsec SAs with the host [38, p. 43]. It provides a link between IKE
  and the SPD.
Based on the above databases and on the fact that they work like an access-control
list or firewall [38, p. 21], it can be inferred that IPsec can offer not only data integrity,
confidentiality and security, but also data filtering. The huge address pool that IPv6
offers, the possibility of uniquely identifying a host, and NAT avoidance give
customers and cloud computing providers more options to secure their data
communication and data centres.
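The filtering role of the SPD can be pictured as an ordered rule lookup. The selector names, rule syntax and prefixes below are illustrative assumptions, not RFC 4301's exact schema:

```python
# Toy SPD lookup: the first rule whose selector matches the remote address
# decides which of the three IPsec services applies to the datagram.
import ipaddress

SPD = [
    {"remote": "2001:db8:cafe::/48", "action": "PROTECT"},  # corporate VPC peers
    {"remote": "2001:db8::/32",      "action": "BYPASS"},   # trusted range
    {"remote": "::/0",               "action": "DISCARD"},  # default deny
]

def lookup(remote_ip):
    addr = ipaddress.ip_address(remote_ip)
    for rule in SPD:
        if addr in ipaddress.ip_network(rule["remote"]):
            return rule["action"]
```

A default-deny final rule, as sketched here, is what gives IPsec its firewall-like character: traffic matching no explicit policy is simply discarded.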
NAT avoidance, seen in the context of IPsec, also helps resolve the
problem of the same security association being offered to different computers. This can
easily happen in data centres, where hosts frequently join and leave the network.
Security associations (SAs) that use the remote IP as a selector can confuse the
original SA peer with a new host to which the same address has been assigned by a DHCP
server, as stated in RFC 4301 [38, p. 36]. Even though this problem can be avoided
through proper mechanisms, with IPv6 and IPsec it would inherently not occur.
As stated before, with IPv6 and IPsec proper end-to-end authentication can be
achieved and thus the functionality of the AH header can be used to its full potential.
Man-in-the-middle attacks can consequently be countered efficiently.
Furthermore, in most cases data only has to be authenticated, with no encryption needed.
This reduces the performance stress on the VPN gateways or IPsec hosts by excluding
the use of heavy encryption algorithms. Moreover, as cloud computing services are
available all across the globe, AH authentication may address a legal issue as well. Some
countries have strict laws concerning encryption and transferring encrypted data across
borders, and others have even tried to ban encryption altogether [39]. Authenticated
data, however, faces no such restrictions; for cloud computing services it can
therefore be the perfect security trade-off. Depending on the customer's policies and on
the sensitivity of the data stored in the cloud, AH authentication can be the best choice
from both the performance perspective and the legal one. On the other hand, in case of
need, IPsec offers the possibility to negotiate IP payload compression, which can reduce
the required processing power. [40]
One may argue that authenticating and encrypting the traffic of every
user that accesses the services of a cloud provider is unrealistic. However, IPv6 in
conjunction with IPsec can offer very fine-grained or coarse data filtering, depending on
the needs of the user. If a customer rents a virtual private cloud (VPC), they can easily
configure the criteria by which data is authenticated, encrypted or
discarded. When talking about IPsec security, it has to be noted that the security
associations created between peers, or between hosts and VPN gateways, are
unidirectional and separate for AH and ESP. This means that, for example, traffic can be
encrypted only when sent from the server to the client and merely authenticated when
sent from client to server. Security can be deployed asymmetrically, with different
security measures applied to different datagrams. The firewall-like capabilities of IPsec,
offered through the databases presented above, imply that much of the traffic can also
be discarded if certain criteria are not met. Furthermore, tunnel mode can
be used to create a VPN connection between the gateway of the customer and that of the
cloud computing provider, with only the traffic that arrives from outside this tunnel
being authenticated and/or encrypted. The fact that authentication can be used as
originally intended by IPv6, even authenticating the IP address itself, gives providers
and customers a broader range of possibilities for deploying security measures.
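The unidirectional, per-protocol nature of SAs can be sketched as a small data structure. The field names and SPI values below are hypothetical, chosen only to make the asymmetry concrete:

```python
# Each direction of a conversation has its own SA, and AH and ESP are
# negotiated separately, so protection can differ per direction.

sad = {
    # (direction, protocol) -> SA parameters (illustrative values)
    ("server->client", "ESP"): {"spi": 0x1001, "cipher": "aes-256-gcm"},
    ("client->server", "AH"):  {"spi": 0x2001, "auth": "hmac-sha256"},
}

def protection(direction):
    """List the IPsec protocols applied to traffic in one direction."""
    return sorted(proto for (d, proto) in sad if d == direction)
```

Here server-to-client traffic is encrypted while client-to-server traffic is only authenticated, matching the asymmetric deployment described above.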
Authenticating the IP header of the packet, especially the source address,
opens a new and more direct way of identifying a host as friendly or not. As stated
before, cloud providers like Amazon deploy security implementations in
which VPN gateways are used, configured by the client. Because an IPv6 host
has a unique address and does not need to sit behind a NAT device for the sake of
conserving the address pool, it can be easily identified and offered security tailored to
its needs. This also gives the provider and/or the customer a detailed way to discard or
secure traffic coming from a certain host or client. Furthermore, if a host inside the
cloud data centre is infected or compromised and starts behaving maliciously, it will be
easier to spot it and to block its access to the internet and the intranet.

With IPv6, every host on the internet may have a unique address. However, due to
the way the addressing scheme works, some portions of the address may change when
moving from one ISP to another, or even when migrating within the same network. This
will make it somewhat harder to identify a user or host based on the IP address.

Figure 13. IPv6 addressing [41]

Figure 13 presents the way an IPv6 address is divided and the meaning of each group.
The site prefix and the subnet ID are the parts of the address that may change when a
host leaves one network and joins another. However, the last 64 bits of the address are
generated based on the MAC address of the network card the host uses. This part will
always remain the same, no matter where or in which network the host resides.
The SPD used by IPsec relies on selectors to identify characteristics of the
datagrams, such as the source address, the destination address or the NAME. The NAME
selector was created especially for hosts that roam across other networks, "road
warriors" that operate outside the parent network [38, p. 28]. Based on the settings of
this selector and the source IP address, a host can be uniquely identified and thus
properly treated. As stated above, this offers a way to identify a host that is outside
the trusted network of a company. It also provides a method of identifying traffic that
has to be treated differently from the traffic that flows through a trusted VPN (created
between a customer and its cloud provider).

Figure 14. Migration and IP based authentication


Figure 14 presents the case of such a road warrior, which moves from ISP 1 to
ISP 2. As can be seen, the IPv6 address changes as the host moves, but the last 64 bits
remain unchanged. These 64 bits should be unique to that host and must be
treated accordingly. Since IPsec does not impose the use of any specific algorithm for
authentication or encryption, a solution for uniquely identifying a host over the internet
can be implemented by the cloud service provider or developed as a new standard. For
example, a host can be identified based on the NAME selector and the unchangeable
portion of the IP address. This gives a more direct way of controlling who can
access certain data in the cloud. Moreover, a byte string can be stored in the
NAME selector, for instance a hash based on the last 64 bits of the IP address and the
e-mail address, computer name, user name, etc. This provides a method not only to
authenticate the packet, but to actually establish a 1:1 mapping between the interface
and the user that has accessed the cloud resources.
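The identification scheme suggested above can be sketched as follows. The exact hash construction and the addresses are hypothetical; the point is only that the token survives a prefix change:

```python
# Combine the stable 64-bit interface identifier with a user attribute into
# a hash that a NAME selector could carry.
import hashlib, ipaddress

def stable_suffix(ipv6):
    """Return the last 64 bits (interface identifier) of an IPv6 address."""
    return int(ipaddress.IPv6Address(ipv6)) & ((1 << 64) - 1)

def name_token(ipv6, email):
    data = stable_suffix(ipv6).to_bytes(8, "big") + email.encode()
    return hashlib.sha256(data).hexdigest()

# The host moves from ISP 1 to ISP 2: the prefix changes, the token does not.
t1 = name_token("2001:db8:1::a00:27ff:fe4e:66a1", "user@example.com")
t2 = name_token("2001:db8:2::a00:27ff:fe4e:66a1", "user@example.com")
```

Because only the prefix differs between the two addresses, both calls yield the same token, so the roaming host remains identifiable.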
However, these methods can be hard to adopt for big companies and big cloud
providers. Authenticating every connection in this way, in an end-to-end fashion, may
be counter-productive. Cloud providers, as in the case of Amazon, usually offer VPN
gateway services. These gateways are the more appropriate place to authenticate
and check the identity of users and, if needed, apply security mechanisms to the
datagrams. The advantage of the authentication method presented before, however, is
that when needed, a particular host or VM in the cloud that holds very sensitive data can
identify every user. Small companies, which are more inclined to benefit from the
financial advantages of cloud computing, can nonetheless implement traffic filtering
in IPsec based on the unique address that every host has. For example, companies
with 10-20 computers can easily grant access, based on the IPv6 address, only to those
hosts in an end-to-end manner. Furthermore, this security implementation is easy to
deploy and, more importantly, is transparent to and independent of the upper-layer
applications.
Cloud computing companies must also be greatly concerned about the internal security
of their data centres. They should secure the traffic exchanged
between different machines, VPSs, etc., as well as the control traffic. One requirement
of the security policies can be the authentication of the peers that initiate a conversation.
In [42], network threats are divided into three planes: the management plane, the control
plane and the data plane. ICMPv6 is a protocol that is part of the data plane and is used
by the Neighbor Discovery protocol.

The Neighbor Discovery (ND) protocol is responsible for router and
prefix discovery, duplicate address detection, neighbor reachability and link-layer
address determination. ND's function is similar to IPv4's ARP and some ICMP
functions. The autoconfiguration mechanisms depend on ND. The greatest advantage
of ND and IPv6 autoconfiguration over IPv4 is that they are entirely IP based, as opposed
to IPv4 ARP or DHCP, which are link-layer protocol dependent. As a consequence, IPsec
AH or ESP could be used to authenticate or secure these protocols; while possible, this
is currently not the practice. [43]

It can be clearly seen that security inside large data centres can be achieved more
easily with the help of IPsec and IPv6. Security, integrity and confidentiality can be
applied to all three planes (management, control, data), something that could not be
accomplished with IPv4. Furthermore, IPsec is transparent to the upper layers and thus
does not influence in any way the software that a customer may use inside the cloud.

5.2 Network Management benefits

IPv4 emerged as a standard protocol at a time when networks were neither
common nor widespread. Thus IPv4 was, and still is, overwhelmed by the size to which
the internet has grown. As a consequence, the way a native IPv4 data centre is
managed today is not very efficient, owing to its size. In a cloud computing environment,
IPv4 is even more outdated, due to the dynamic nature of the machines that exist on the
network. The pay-as-you-go model employed by cloud computing providers means
that the network topology in a data centre is unpredictable and hard to manage and track.
However, IPv6 was developed exactly for this reason: to better cope with and integrate
large networks, in which management has to be very efficient, transparent and as
automated as possible. Even though cloud computing was not yet a developed concept
when the plans for IPv6 emerged, the tools it offers fit perfectly into the cloud
paradigm.

IPv6 provides multiple addresses per interface, link-local, global unicast and
unique local, a different addressing scheme from the one-address-per-interface approach
of IPv4. This offers new ways of host and network management. The stateless
configuration of IPv6, along with ND, provides host management that is independent of
a centralized point.

5.2.1 IPv6 addressing and interface identifiers

One of the reasons for developing IPv6 was the address exhaustion of IPv4. Hence it
was decided that the new IP addresses should be 128 bits long, compared to the 32-bit
IPv4 addresses. In this way it was ensured that the number of possible addresses would
last for the foreseeable future and could be used by all internet-capable devices.
However, the differences are not limited to the high number of available addresses;
they extend to the meaning of an address in the context of the network and to the way
an address is generated.
Interface identifiers are used to uniquely define a host and can be generated in
several ways, from manual to automatic or stateless configuration. The method of
generating an IPv6 interface identifier using the EUI-64 address format is defined in
[44, Appendix A]. This method uses the IEEE 802 address, also known as the MAC
(media access control) address, to generate a globally unique identifier. Another 64 bits
are then added to the EUI-64-based identifier; these define the scope of the IPv6 address
as well as identify the subnet, company, ISP and registrant of that particular IP.
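The modified EUI-64 derivation can be sketched in a few lines: insert 0xFFFE between the two halves of the MAC address and flip the universal/local bit, as described in [44, Appendix A]. The MAC address used here is an arbitrary example:

```python
# Derive an IPv6 interface identifier from a 48-bit MAC address (modified
# EUI-64): split the MAC, insert 0xFFFE, flip the U/L bit of the first octet.

def eui64_interface_id(mac):
    octets = bytearray(int(b, 16) for b in mac.split(":"))
    octets[0] ^= 0x02  # flip the universal/local (U/L) bit
    eui = octets[:3] + bytearray([0xFF, 0xFE]) + octets[3:]
    # Format as four 16-bit groups, IPv6 style (leading zeros suppressed).
    return ":".join(f"{(eui[i] << 8) | eui[i + 1]:x}" for i in range(0, 8, 2))

iid = eui64_interface_id("08:00:27:4e:66:a1")  # -> "a00:27ff:fe4e:66a1"
```

Prepending a 64-bit network prefix to this identifier yields the full 128-bit address, which is why the last 64 bits stay stable as a host moves between networks.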
The scope of an IPv6 address defines its reachability and routing policies in
the local network and on the internet. Address scopes can be defined as follows:

- Interface-local: represents loopback addresses.
- Link-local: addresses that are valid only on the same network link; they
  cannot be routed.
- Unique local: addresses that have significance only inside a certain
  domain or organization and thus cannot be routed outside that specific network;
  similar to IPv4 private addresses.
- Global: addresses that are routable on the internet; public addresses.
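The scope categories above can be checked programmatically, for example with Python's `ipaddress` module (unique local addresses fall under the fc00::/7 prefix defined in RFC 4193); the sample addresses are illustrative:

```python
# Classify example IPv6 addresses by scope using the standard library.
import ipaddress

loopback   = ipaddress.ip_address("::1").is_loopback              # interface-local
link_local = ipaddress.ip_address("fe80::1").is_link_local        # link-local
ula        = (ipaddress.ip_address("fd12:3456:789a::1")
              in ipaddress.ip_network("fc00::/7"))                # unique local
```

All three checks hold for the sample addresses, mirroring the scope list above.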

The fact that in IPv6 an interface can have multiple scoped addresses can be seen as
an increase in complexity, for both data centre administrators and clients.
However, at a closer look, this new addressing scheme is beneficial for the data
centres that host cloud services, as depicted in Figure 15. Unique local addresses can be
used for server management or even VM management. In this way, traffic can be
easily divided into control and management traffic on one side and other traffic on the
other. All the management services that run on a physical server or a VM can be bound
to one unique local address, for better protection against outside attacks. Moreover,
cloud service providers can easily create a management network inside the data centre
without affecting regular traffic and with no access from the outside world. This would
also reduce the complexity of the network, should the administrators prefer not to use
VLAN technologies.
In case a cloud computing provider offers services for virtual servers and
online storage separately, as Amazon EC2 does, the customer or the administrator
would have the option of allowing only certain IPs to access the storage server (or
virtual hard drive), which can easily be set and configured in servers like Samba. This
would not be possible with IPv4, primarily because the private IPv4 address range is
still used inside the data centres. Moreover, it is very likely that the addresses
will change constantly, as they are assigned by a DHCP server.

Figure 15. Multiple addresses per interface and randomly generated addresses

With unique local addresses, private, stable addresses can be bound to certain machines
or servers (as will be seen later, IPv6 also offers efficient methods for anti-spoofing).
This will increase the security of the service deployed inside the cloud. Also, through
unique identification of hosts, unauthorized accesses can easily be logged and traced
inside the data centre, consequently reducing the probability of attacks coming from
inside the network.
Through multiple IP addresses per interface, an IPv6-enabled cloud host can reduce
its unnecessary exposure to the outside internet; a scenario is presented in
Figure 15. A machine that runs in the cloud can be both server and client. In this
case, the host can have one globally unique address for accepting incoming
connections, another for its own queries as a client and yet another for use in the
local domain environment. For example, a host will have a different address when
responding to a DNS query than when initiating a connection with a peer. For outgoing
connections, the privacy extensions defined in [45] can be used to generate random
addresses. This creates an asymmetric method of addressing, with the advantage that a
machine can conceal its functions (server only or server/client). Furthermore, this
helps when talking about IPsec and the SAs created between peers. Through multiple
addresses per host, one per role (server or client), a granular policy on
how IPsec manages packets can be employed.
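As a minimal sketch of the privacy extensions mentioned above, the hypothetical
helper below (not part of the thesis) appends a randomly generated 64-bit interface
identifier to a /64 prefix, in the spirit of RFC 4941:

```python
import secrets
import ipaddress

def temporary_address(prefix: str) -> ipaddress.IPv6Address:
    """Append a random 64-bit interface identifier to a /64 prefix,
    in the spirit of the RFC 4941 privacy extensions."""
    net = ipaddress.IPv6Network(prefix)
    assert net.prefixlen == 64, "privacy addresses assume a /64 prefix"
    iid = secrets.randbits(64)
    # Clear the universal/local bit (0x02 in the first octet of the
    # identifier) to mark the identifier as locally generated.
    iid &= ~(1 << 57)
    return net[iid]
```

A host regenerating such an address periodically makes long-term tracking of a
single machine considerably harder.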

Figure 16. Anycast


Figure 16 presents a new concept that IPv6 has brought: anycast. For better
management of data access and communication, three types of addresses were
created in IPv4: unicast (one-to-one communication), broadcast (one-to-all
communication) and multicast (one-to-many communication). IPv6 introduced a new
addressing technique: anycast (one-to-one-of-many communication).
More to the point, through anycast one host or source can access the data/server/host
that is closest to it, as presented in the above figure.
Anycast can be used to determine the closest geographical data centre of the
cloud computing provider. For example, in case a user travels from one country or
continent to another, through the use of anycast addressing the client can
access their data from the closest data centre. In conjunction with VM migration or
any other data distribution technique, anycast addresses could provide the simplest
path for traffic, data access and data retrieval. Furthermore, anycast
addresses seem to be an efficient way to use bandwidth and processing power,
by using the closest available resources. Also, anycast methods of accessing data
fit perfectly in the world of Content Distribution Networks, which usually spread
across multiple regions.

5.2.2 Stateless approach

The high number of IPv6 addresses and the incredibly elastic nature of a cloud
computing data centre create the perfect environment for stateless configuration of
hosts, in other words for auto configuration without any persistent data
stored on the network.
Data centres that host cloud computing services distinguish themselves through
an arbitrary number of servers or VMs running on the network infrastructure at any
point in time. This means that addressing such a data centre manually would take a
lot of manpower. Even if DHCP is involved, the configuration and the number of
servers needed would, in some cases, increase the complexity even more. The statement
"The stateless approach is used when a site is not particularly concerned with the
exact addresses hosts use, so long as they are unique and properly routable." [46]
fits perfectly in the paradigm of virtual infrastructures. Data centres and their
cloud services should focus on stateless configuration of their machines rather than
on a centralized service.
Despite the truistic nature of the statement "A complex entity will always
generate more problems than a simpler one", it is the cornerstone of the world of
data centres and, implicitly, cloud computing. As presented above, some of the
deployed security measures can increase the complexity of the whole system (IPsec).
Auto configuration reduces the administrator's burden when considering the addressing
of hosts in a network. Furthermore, in cloud computing environments this is a
blessing, because it fits perfectly the dynamic nature of the data centres, with
hundreds, if not thousands, of VMs and servers going online and offline at an
incredible rate.
Compared with IPv4 auto configuration, IPv6 takes it a few steps further. The
machine is able to configure itself with all the information needed for both link
access and internet access. Some argue that a DHCP server is still a good
idea, in order to provide randomness of IPs, thus screening the host from possible
attacks. However, IPv6 auto configuration can provide all the benefits of using a
DHCP server. Both [45] and [47] support the stateless approach by generating random
addresses. Hence, some of the benefits of DHCP's randomness of addresses can
easily be substituted by such IPs. Moreover, the privacy issue arising from the use
of the MAC address in the interface identifier, [2, p. 282] [48], can yet again
easily be solved by the host itself, through the same randomly generated addresses.
Elasticity is a defining term for virtual infrastructures. It is very important that
new resources become available to upper layer services as soon as possible. That
means the effectiveness and complexity of the addressing method is of utmost
importance. DHCP servers, which are nowadays mandatory in big networks, face a
challenge in remaining valuable. For example, DHCP requests use broadcast messages to
contact DHCP servers. In the best case, at the expense of more configuration
done on the routers, a DHCP server could serve several networks (the routers will have to
forward the hosts' DHCP requests; DHCP relays). In the worst case, on the other hand,
every segment of the network would need its own DHCP server.
Furthermore, even for basic connectivity, a host will still depend on the central
server. The fact that an additional machine has to be deployed on the network to
provide even basic connectivity does not scale well with the need for rapidly
available resources in data centres.
Through stateless auto configuration, hosts can configure themselves to the point
where they are reachable on the network (link-local). This provides basic
connectivity and communication with the hosts on the same link. It means that as
soon as it is online, the host can communicate to some degree and, most importantly,
it does not depend on any other service for this. However, for broader communication
scopes (site-local, global), the host has to request information from a device that
is indispensable to any network, and hence omnipresent in all networks: the router.
The request is done via multicast addressing, to a device already on the network and
not to an additional one, like the DHCP server.
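As a sketch of what a host computes entirely on its own, the hypothetical helper
below derives the EUI-64 based link-local address from an interface's MAC address,
before any router or server is consulted:

```python
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    """Derive the EUI-64 based link-local address (fe80::/64) that a
    host can configure for itself without any DHCP server."""
    octets = [int(b, 16) for b in mac.split(":")]
    # Insert ff:fe in the middle of the MAC and flip the
    # universal/local bit of the first octet.
    eui64 = octets[:3] + [0xFF, 0xFE] + octets[3:]
    eui64[0] ^= 0x02
    iid = int.from_bytes(bytes(eui64), "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | iid)
```

For example, the MAC address 00:1a:2b:3c:4d:5e yields the link-local address
fe80::21a:2bff:fe3c:4d5e.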
It can be concluded that through stateless configuration the complexity of the whole
cloud computing data centre can be reduced. Furthermore, it supports the idea that by
simplifying one thing, one can afford the complexity of other, possibly more
important systems.

5.2.3 Address validation

The development of cloud computing, with its huge processing power and affordability,
means that attackers now have more powerful tools than ever to accomplish their
goals; hence, attacks originating from cloud computing providers are increasing
steadily. Address spoofing has always been one of the means for attackers to hide
their traces, hijack traffic and more. Thus, cloud providers face a great
responsibility in trying to stop such users from exploiting their infrastructure as a
platform for denial of service (DoS) or distributed DoS attacks. In addition, the
need for cloud infrastructure providers to protect themselves against such attacks
is even more imperative than in traditional environments. This is because the
attacks are more likely to come from inside the cloud provider's network and are
focused on the information of the companies that use those providers.
The address spoofing problem has been addressed many times and solutions have
been proposed. Although there are methods in use today for filtering incoming packets
and checking their source address as a means of validation, these mainly apply at a
macro level, without a way of checking the validity of the packets at a finer-grained
level. Such examples are ingress filtering and unicast reverse path forwarding.
Moreover, these do not provide any protection against spoofing attacks from within
the network, attacks that are more likely to occur in a cloud data centre. DoS
attacks can cause a massive drain of computing power, which, translated into cloud
terms, might mean a heavy loss in availability and money.
IPv6 and the protocols behind it that provide extra functionality bring viable
solutions to the problem. By deploying full support for IPv6, cloud providers can more easily
prevent spoofing attacks. Cryptographically generated addresses (CGA) are presented
in [47] as a solution for verifying that the sender uses the address assigned to it.
Furthermore, they present themselves as a viable solution for mobile environments
[49], a benefit that can be exploited for new cloud related services, as well as by
cloud computing users who move from one network to another.
An IPv6 implementation can prove to be optimal for verifying the authenticity of a
customer/client without deploying any key exchange infrastructure.
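The core of CGA generation can be sketched as follows. This is a deliberately
simplified illustration of RFC 3972 (it omits the Hash2 brute-force step and the
full CGA Parameters encoding): the interface identifier is derived from a hash of
the modifier, the subnet prefix and the owner's public key, which binds the address
to the key without any key exchange infrastructure.

```python
import hashlib

def cga_interface_id(modifier: bytes, prefix: bytes, public_key: bytes,
                     collision_count: int = 0, sec: int = 0) -> int:
    """Simplified sketch of RFC 3972 CGA generation: hash the CGA
    parameters, take the first 64 bits as the interface identifier,
    clear the u/g bits and encode the Sec value in the top 3 bits."""
    digest = hashlib.sha1(modifier + prefix +
                          bytes([collision_count]) + public_key).digest()
    iid = int.from_bytes(digest[:8], "big")
    iid &= ~(0x03 << 56)                       # clear the u and g bits
    iid = (iid & ~(0x7 << 61)) | ((sec & 0x7) << 61)  # encode Sec
    return iid
```

A verifier can recompute the same hash from the advertised parameters and public
key; a spoofer who does not hold the key cannot produce a matching address.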
With IPv4, attackers can use ARP as a way to hijack traffic and redirect it to
an unwanted host. As with IP address spoofing, ARP can be subject to the same kinds
of attacks. However, IPv6 does not use the ARP protocol; instead, the neighbor
discovery (ND) protocol takes over all its tasks. ND can likewise be subject to
attacks but, unlike ARP, ND can be protected and used against spoofing attacks.
In [50], a way of securing ND with the use of cryptographically generated addresses
(Secure Neighbor Discovery) is presented. This is a huge benefit compared to what
IPv4 has to offer, because this mechanism can only be applied to IPv6 networks and
is furthermore independent of IPsec. It can clearly be seen that IPv6 has more
security and counter-attack options that can be used in a virtual environment.
When talking about cloud computing data centres, intranet security is quite
sensitive and has to be of great concern to the security department. As stated
before, in a cloud provider's data centre, attacks from within the network are more
likely. Traditional source verification techniques now used in IPv4 networks
do not validate the source address at host level, but rather at network prefix level
(ingress filtering and uRPF). However, [51] presents a solution, based on IPv6 and
SEND, that can validate the addresses of hosts inside a network. Through this
method, data centres might be able to prevent the illegitimate use of addresses for
spoofing attacks. Additionally, this method will protect providers from legal issues,
such as liability for spoofing attacks, and will discourage attackers from using
cloud resources for their attacks. As the world of cloud computing is growing and
seems to attract more services (gaming, video, blogging, etc.), accountability for
user actions that exploit the resources and power of cloud data centres has to be
deployed and, as demonstrated, IPv6 is able to provide the best tools for this.
In [52], a technique is presented for validating the source at a greater
level: source addresses can be validated as coming from the right host, or as
originating from the AS they indicate. This is a source validation solution, based
solely on IPv6, that can be used in multi-site data centres spread across multiple
continents, similar to Amazon EC2. The two methods of source validation [51] [52]
can be implemented simultaneously in data centres to prevent spoofing attacks within
a provider's network as well as between the data centres of different providers.
In [48], the thesis is promoted that through stateless auto configuration of an IPv6
host and the deterministic method of creating an IPv6 address, a host, and eventually
a user, can easily be geotagged, traced and have its data traffic analysed, in some
cases even with malicious intent. With CGA, the anonymity of users is secured. This
is particularly welcome when sensitive data is stored in the cloud and certain
information
about the host must be concealed from potential eavesdroppers, primarily the MAC
address embedded in the interface identifier. This will prevent a potential attacker
from determining the predominant OS of a certain network. However, this does not
rule out methods for source validation.
IPv6, with all its aspects of flexibility, is a perfect fit for the flexible
virtual environment encompassed by cloud computing. It presents itself as a panacea
protocol that is not only simple, but also offers viable solutions and options to
many of the problems that generate insecurity in cloud computing businesses. Through
IPv6, multiple security measures can be implemented based on the needs of the
provider as well as those of the user. Furthermore, when dealing with the new version
of IP, one has to see past the potential problems, as presented in [48], and accept
that in the long run its versatility offers solutions and means of improvement.

5.3 QoS benefits

The internet is a best effort network in which there is no discrimination between
participants, packets, flows and users. All components are treated equally. A problem
arises in modern networks, in which more and more traffic sensitive applications are
used and deployed around the world. Furthermore, as real time services for
common users, such as video conferencing, online gaming, audio services and
financial information, grow in popularity, greater pressure is put on networks to
further advance their traffic quality policies.
Quality of service (QoS) enables providers to meet the requirements for certain
types of data flows or packets, as stated in prior service agreements with the
customers or based on their own needs. Service-level agreements (SLA) can bring
advantages to cloud computing companies that provide some sort of real time software
or services. Even though the implementation and inner workings of QoS in IPv4 and
IPv6 are mostly the same, there are still small, fine differences that can bring
value to companies that look to profit from them.
QoS services are split into two main architectures: integrated services, or IntServ,
and differentiated services, or DiffServ. They provide two different ways to
implement QoS. IntServ uses the idea of resource reservation. In other words,
before sending information to the destination, the source has to reserve the
resources needed for that particular data flow. Moreover, since reservations are
unidirectional, the receiver has to reserve its own resources for the reverse
direction. The resource reservation persists while the data flow exists. In
addition, when the communication ends, the resources have to be explicitly released.
To some extent, this approach resembles virtual circuit networks.
The resource reservation in IntServ can be a problem in large virtual
environments and can compromise the network. Every router along the data flow's path
has to maintain the state of the reserved resources. This tends to put a lot of
strain on the equipment and its processing efficiency. In large virtual environments
with thousands of virtual machine instances, it can lead to inefficient use of the
infrastructure. Additionally, the persistence of the virtual circuits is not well
suited to the flexible,
dynamic nature of cloud computing environments. However, the flow label field
in IPv6 was designed bearing in mind such resource reservation techniques.
It can be used to better identify packets of the same flow or, in the context of
IntServ, of the same circuit.
The DiffServ method relies on a more coarse-grained approach. Contrary to the
IntServ architecture, DiffServ has per-hop behaviour. That means that all equipment
along the route that analyses IP packets is able to make its own decisions
based on some predefined policies. Data flowing in a network that uses
DiffServ is classified depending on the provider's needs or based on certain SLAs.
To accomplish this, the bits of the ToS field (IPv4) or Traffic Class field (IPv6)
are used to differentiate between packets and decide how they will be treated.
Table 1 shows the different DSCP markings that are used for packet classification
and behaviour. EF (expedited forwarding) is used to mark packets that are delay
sensitive, like audio data. The assured forwarding classes, AF1 to AF4, represent
different levels of importance, each with three drop precedence sublevels (e.g.,
AF11 with the lowest drop precedence in its class, AF13 with the highest).

Per Hop Behaviour   Low Drop         Medium Drop      High Drop

BE                  000000 (no special treatment, best effort delivery)
EF                  101110 (used for time critical applications)
AF1                 001010 (AF11)    001100 (AF12)    001110 (AF13)
AF2                 010010 (AF21)    010100 (AF22)    010110 (AF23)
AF3                 011010 (AF31)    011100 (AF32)    011110 (AF33)
AF4                 100010 (AF41)    100100 (AF42)    100110 (AF43)

Table 1. DSCP markings used for packet classification.
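The markings in Table 1 can also be expressed programmatically. The illustrative
sketch below maps the PHB names to their 6-bit DSCP values and builds the IPv6
Traffic Class octet, whose upper six bits carry the DSCP and whose lower two bits
carry ECN:

```python
# DSCP values (6 bits) for the per-hop behaviours listed in Table 1.
DSCP = {
    "BE": 0b000000, "EF": 0b101110,
    "AF11": 0b001010, "AF12": 0b001100, "AF13": 0b001110,
    "AF21": 0b010010, "AF22": 0b010100, "AF23": 0b010110,
    "AF31": 0b011010, "AF32": 0b011100, "AF33": 0b011110,
    "AF41": 0b100010, "AF42": 0b100100, "AF43": 0b100110,
}

def traffic_class(phb: str, ecn: int = 0) -> int:
    """Build the Traffic Class octet: DSCP in the upper 6 bits,
    ECN in the lower 2 bits."""
    return (DSCP[phb] << 2) | (ecn & 0x3)
```

For instance, traffic_class("EF") yields 0xB8, the octet value commonly seen on
expedited forwarding packets.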
IPv4 and IPv6 use these identifiers in the same way, so there is no difference in
how DiffServ is implemented in IPv4 and IPv6 networks. It is worth mentioning that
QoS in cloud computing is more easily and more suitably implemented using the latter
architecture. Data centres can apply their own QoS policies, independent of the
outside world, and do not have to base them greatly on other external
infrastructures, due to the fact that the different DSCP values are mapped locally.
Problems can arise in both architectures, but the small improvements in IPv6 can
work some of them out. As mentioned earlier, the flow label field in the IPv6 header
was created to improve such architectures. Papers such as [53] analyse the
performance of using the flow label in QoS implementations and provide an
alternative to already existing IPv4-only QoS solutions. This reflects the
flexibility of new IPv6 based QoS techniques.
The flow label can change the way QoS is deployed in data centres. In IPv4, a
packet is determined to belong to a certain data stream, and hence the same policies
should be applied to it, based on a 5-element tuple: source address, destination
address, protocol, source port and destination port. Thanks to the flow label, IPv6
routers can base their QoS decisions, and which policies to use, on just a 3-element
tuple: source address, destination address and flow label. As can be seen in
Figure 4, the flow label is placed before the source and destination addresses; all
information about the packet is immediately
available regardless of higher headers or protocols. This is not possible with IPv4,
which has to analyse the TCP or UDP header for the source and destination ports.
Furthermore, the IPv4 5-element tuple raises another problem: "Quality of Service
(QoS) is available in IPv4 and it relies on the 8 bits of the IPv4 Type of Service
(TOS) field and the identification of the payload. IPv4 Type of Service (TOS) field
has limited functionality and payload identification (uses a TCP or UDP port) is
not possible when the IPv4 packet payload is encrypted." [54] This affirmation is
also backed up in the book Deploying IPv6 Networks [55, p. 179]. This means that, in
some cases, cloud providers using IPv4 cannot offer QoS services to users who want
to encrypt their traffic. On the other hand, with IPv6 and the flow label, proper
QoS is not lost due to encryption that might happen with IPsec or at a higher level.
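To illustrate why the 3-element tuple survives encryption, the hypothetical sketch
below extracts a flow key using only the fixed 40-byte IPv6 header; the (possibly
encrypted) transport payload is never touched:

```python
import socket
import struct

def flow_key_v6(packet: bytes) -> tuple:
    """Classify an IPv6 packet by (source, destination, flow label),
    all of which sit in the fixed 40-byte header."""
    # Bytes 0-3: version (4 bits), traffic class (8), flow label (20).
    ver_tc_flow, = struct.unpack_from("!I", packet, 0)
    flow_label = ver_tc_flow & 0xFFFFF
    src = socket.inet_ntop(socket.AF_INET6, packet[8:24])
    dst = socket.inet_ntop(socket.AF_INET6, packet[24:40])
    return (src, dst, flow_label)
```

An IPv4 classifier, by contrast, would have to parse past the IP header into the
TCP or UDP header for ports, which fails once the payload is encrypted by IPsec ESP.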
Network management can benefit from the QoS that IPv6 offers. With the flow label,
packets can be managed separately, even when security protocols are deployed.
Furthermore, the fact that control protocols can be secured with encryption (NDP,
Mobile IPv6 binding updates, etc.) raised the problem of proper flow based QoS. In a
fully dynamic environment such as a cloud computing data centre, troubleshooting,
error reporting, management, configuration, etc. have to have priority, so that
resources are rapidly and highly available. Management traffic can be treated as
traffic of critical importance even when IPsec has been deployed to encrypt the
traffic.
IPv4 allows fragmentation of packets along the route from source to destination.
The most important parameter for fragmentation is the MTU, or maximum transmission
unit. It refers to the maximum length a packet can have when sent across the network
from source to destination and is usually 1500 bytes. In case a packet exceeds the
MTU of a link, the routers along the path will fragment it. This creates two
problems. Firstly, fragmentation consumes router processing power for splitting the
packets into smaller ones with respect to the MTU. Secondly, the overhead of the
packets is increased, because the fragmentation flag and fragment offset field have
to be set and calculated. The effect of these problems is usually increased delay
and, in the case of heavy traffic, congestion due to packet processing time. As a
result, QoS suffers when packets exceed the MTU.
IPv6 does not allow intermediate fragmentation of packets. To support this
requirement, the minimum link MTU for IPv6 has been increased to 1280 bytes, and
every node is expected to use path MTU discovery to determine the maximum packet
size between itself and the destination. As a consequence, IPv6 can provide better
and faster data delivery and suits modern networks better, in which there is an
increase in traffic composed of time sensitive data and file transfers. Furthermore,
in case fragmentation is still needed, it is done only end-to-end and does not put
any pressure on the intermediate network that lies between the source and
destination. The Fragment extension header is used, which the routers along the path
do not have to analyse or process.
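The end-to-end cost of IPv6 fragmentation can be illustrated with a small
calculation. This is an illustrative sketch only, assuming the 8-octet Fragment
header and the rule that all fragments except the last carry a multiple of 8 octets
of data:

```python
import math

def ipv6_fragments(payload_len: int, path_mtu: int) -> int:
    """Number of packets the *source* must emit for a payload, given
    the discovered path MTU. Each fragment repeats the 40-byte IPv6
    header plus an 8-byte Fragment header."""
    assert path_mtu >= 1280, "IPv6 requires a minimum link MTU of 1280"
    if payload_len <= path_mtu - 40:
        return 1                       # fits, no Fragment header at all
    room = (path_mtu - 40 - 8) // 8 * 8   # usable data per fragment
    return math.ceil(payload_len / room)
```

For example, a 3000-byte payload over a 1500-byte path MTU requires three
end-to-end fragments, but no router along the path spends cycles producing them.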
In [56], data centre design requirements have been modelled in order to reduce the
problems that occur in data centres and impose more traffic engineering techniques,
both in the data centre and in outside networks. These are:

- Multipath routing.
- Coordinated scheduling using a global view of traffic.
- Exploiting short-term predictability for adaptation.

Correlating the data centre requirements stated above with the following quote:
"We find that existing techniques perform sub-optimally due to the failure to utilize
multipath diversity, due to the failure to adapt to instantaneous load, or due to the
failure to use a global view to make routing decisions." [56], it seems that IP
related problems like load balancing and multipath traffic engineering represent a
big dilemma for data centres and, consequently, for cloud computing.
IPv4 provides some solutions to these problems, particularly for the multihoming
technique, through which one customer has two or more connections to ISPs for
redundancy and load balancing. Even though IPv4 offers solutions for multihoming
[57], more effort is being put into exploring the possible solutions and advantages
that IPv6 can bring to these problems, through the multi6 IETF working group.
RFC 6438 [58] presents a solution that uses the unique feature of IPv6, the flow
label, for a technique that allows multipath routing in tunnelled environments. As
many companies might opt to tunnel their traffic, through IPsec or other protocols,
this solution would prove to be optimal for data centres wanting to uniformly
distribute their outgoing traffic between several providers.
In the long run, the new IP will provide new benefits to modern data centres
and to future cloud computing services. As seen in [58], the new features of
IPv6 are already being used to improve traffic engineering and QoS methods that can
be applied to benefit data centres and cloud computing services.

5.4 Performance benefits

Performance and resource availability are two characteristics that define the
quality of a cloud service; they are the cornerstones that can destroy or help
develop an online service provider. As a consequence, these two features must be
given priority. IPv6 can improve both, adding value to the service the
provider is offering.
Anycast can improve resource availability and latency as well as provide
location aware rerouting. Broadcasting, as will be presented, is extremely sensitive
in the context of data centres. Fortunately, IPv6 has the means, through its
design and collateral protocols (ND, ICMPv6), to offer pertinent solutions that do
not imply a change in the cloud computing paradigm.

5.4.1 Load balancing

A previous chapter underlined the benefits of opting NAT equipment out of the
network. However, the benefits of that action apply to a client/server connection,
for example between a client and its VM or virtual server. In the case of a PaaS or SaaS
infrastructure, a one-to-one connection between the client and a peer (VM, physical
server, VPS) is unlikely. Load balancing is, however, of much greater importance
when talking about PaaS and SaaS. Clients' requests, information and access to the
services are split among a large number of machines. In this way, optimum
performance is offered to the end consumer. For this reason, an efficient method of
distributing the load among these machines is imperative.
IPv6 anycast proves to be a feasible solution for improving load balancing
and failover mechanisms. Figure 16 offers a view of what an anycast address is:
servers with the same IP address are located at different geographical locations;
when a client requests services from these servers, the one closest to the client
answers. The advantage of this addressing scheme is that it offers the possibility
of failover mechanisms without the need for any heavy reconfiguration in the
eventuality of a data centre going offline.
With IPv6 anycast, reduced complexity and increased availability can be more easily
achieved in cloud computing data centres. As presented in the paper Anycast as a
Load Balancing feature [59], anycast can provide the means for failover mechanisms
between the load balancers of the same company. Furthermore, as concluded in the
paper, anycast is a solution for fast network recovery in case of data centre
failure. Also, it offers the means of reducing complexity and administration costs.
Anycast is a stateless mechanism. As a result, anycast cannot ensure that the
datagrams of the same data flow will always be delivered to the same location,
which means that in some cases a TCP session cannot be sustained or kept alive.
By itself, therefore, the anycast addressing scheme is best used for UDP
transmissions. DNS services, as a protocol that mainly uses UDP, can be made aware
of anycast addressing. This creates support for geo-aware DNS, which can provide
low latency and optimum resource allocation to customers. As an example, Amazon
already provides this method in its Route 53 DNS service [60]. However, at this
time this is supported only over IPv4.
The modular nature of IPv6 can provide improvements for anycast as well. This
means that, based on certain needs, methods can be created to provide the services
required by cloud computing providers. In the article IPv6 Anycast Routing aware of
a Service Flow [61], a solution is presented based on functionalities of the
neighbour discovery protocol; the state of a certain traffic flow can be sustained
and redirected to the same server every time an anycast service is used.
Furthermore, it has to be pointed out that such solutions are transparent to and
independent of upper layer applications. This creates the perfect environment for
cloud computing software and applications, since they do not have to be modified to
support new IPv6 functionalities.
In the short introduction to load balancing [24], a method called Direct Routing is
presented as one of the most efficient. It implies the use of a load balancer
that changes the destination MAC address on the fly, such that each time a different
server answers the request. Furthermore, each server behind the load balancer uses
the same virtual IP address to answer the requests. However, this method, as pointed
out in the article, presents a difficulty in deployment: the servers behind the load
balancer must not answer ARP requests, as this would destroy the functionality of
this load balancing method (every MAC would be mapped to a different IP, and not to
the virtual IP). As already mentioned, IPv6 does not use ARP, as it was replaced by
a new protocol, neighbor discovery. Therefore, it may be possible to avoid the
problem of ARP responses and create a more efficient way for the load balancer to
build a pool of MAC addresses of the servers behind it through ND mechanisms.

5.4.2 Broadcast efficiency

Broadcast is a legacy method present in Ethernet communications, characterized by
the flooding of the network with layer 2 datagrams. A broadcast domain is
represented by a LAN segment through which broadcast datagrams are propagated to
all hosts residing on that particular LAN. ARP and DHCP base their services on this
method of host access. In small scale networks, in which all broadcast domains have
a small or moderate number of hosts, flooding the network with traffic in order to
find the needed destination might not be problematic. However, in data centres,
broadcast traffic might prove to be wasteful and not dynamic enough.

Figure 17. Structure of a data centre and exponential number of machines


When traffic flooding occurs, it is forwarded all around the network. As
presented in Figure 17, if broadcast occurs, all the servers in the data centre
cluster will hear it; all links will be affected by the traffic. The switch to which
all the servers are connected will forward the datagrams to all neighbouring hosts.
The ratio of VMs per server, as stated in [15, 16], can reach quite high numbers,
12:1 or 15:1, with a high likelihood that this ratio will increase even more in the
near future. This is portrayed in Figure 17, in which every rack can hold a high
number of servers, which in turn host a large number of VMs [17]. This leads to the
idea that a high number of hosts (15+) will reside on a single link. Furthermore,
in a presentation by David Hadas of IBM Haifa Research Lab [6], it is stated that
the number of physical and virtual network interfaces that might exist in a large
data centre can reach up to 100,000,000. In the paper The Cost of a Cloud [25], it
is stated that a layer 2 domain can support up to 4,000 hosts.
Moreover, "since ARP is required between each source and destination pair, its
broadcasting overhead increases in proportion to the number of communicating host
pairs" [62]. As a result, in data centres, the overhead of broadcasting and of
broadcast based services, such as ARP, DHCP, NetBIOS, etc., means a decrease in
bandwidth efficiency.
However, "due to the limitation of Ethernet broadcast the L2 domain is usually
split to support up to a few hundred hosts, through VLAN technologies, although this
leads to extra configuration and inefficient use of IP addresses." [62, p. 1] Even
when logically dividing the broadcast domain for better optimization, Ethernet still
experiences scalability problems. Consequently, broadcast flooding is undesirable
in data centres, or at the very least the reduction of such traffic is crucial.
IPv6, as presented many times, renounces the services of ARP and, in some cases,
those of DHCP. IPv6 uses multicast for L2 address discovery. This method reduces the
amount of unsolicited traffic propagated through the network.
ARP, or address resolution protocol, is used for mapping MAC addresses to IP
addresses. When a host wants to send messages to a specific destination, it requires
the destination MAC and destination IP addresses. Through different means, the
destination IP address is determined or already known. In case the source does not
have the destination MAC address in its cache, it has to discover it. This is done
by flooding the network with a packet that contains the destination IP address,
which is already known, and the broadcast MAC address FF:FF:FF:FF:FF:FF. This
message reaches all hosts on the LAN segment, but only the host with the specified
destination IP address responds. In a cloud computing data centre, these requests
might occur more frequently than in a common network. One of the reasons is the
dynamic nature of the network topology, in which hosts/servers/clients go online
and offline very fast, based on the needs of the upper layer software and services
provided to the customers.
ND takes the place of ARP. Instead of broadcast, it uses multicast for address
resolution. When performing address resolution, the source sends the request with the
solicited-node multicast address as the IP destination and the corresponding multicast
MAC address as the L2 destination. The solicited-node multicast address is formed by
appending the 24 least significant bits of the unicast address to the prefix
ff02::1:ff00:0/104, resulting in an address of the form ff02::1:ffXX:XXXX. The
destination MAC address is similarly created, by prepending the multicast prefix 33:33
to the 32 least significant bits of the solicited-node multicast address, resulting in a
destination MAC address of 33:33:FF:XX:XX:XX. As the destination host has already
joined the multicast group determined by ff02::1:ffXX:XXXX and 33:33:FF:XX:XX:XX,
it is the only host that will receive the address resolution request. Thus, no other host on
the LAN is disturbed with unnecessary requests. In some cases, when several addresses
share the same low 24 bits, more than one host can receive the message; however, the
number of hosts affected is small. Furthermore, for a true no-broadcast environment,
the switches in the network must support MLD snooping, for proper multicast
forwarding.
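This derivation can be sketched in a few lines of Python, using only the standard library; the sample link-local address below is an arbitrary example, not taken from the thesis:

```python
import ipaddress

def solicited_node(addr: str):
    """Derive the solicited-node multicast address and the corresponding
    multicast MAC address from a unicast IPv6 address."""
    # 24 least significant bits of the unicast address.
    low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
    # Append them to the ff02::1:ff00:0/104 prefix.
    mcast = ipaddress.IPv6Address(
        int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)
    # The MAC address is 33:33 followed by the low 32 bits of the
    # multicast address (which always start with ff for solicited-node).
    low32 = int(mcast) & 0xFFFFFFFF
    mac = "33:33:" + ":".join(f"{(low32 >> s) & 0xFF:02X}"
                              for s in (24, 16, 8, 0))
    return str(mcast), mac

print(solicited_node("fe80::2aa:ff:fe28:9c5a"))
# ('ff02::1:ff28:9c5a', '33:33:FF:28:9C:5A')
```

Hosts whose unicast addresses differ anywhere in the upper bits still map to the same group only if their low 24 bits collide, which is why the request normally reaches a single host.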
As can be seen from the comparison of ARP and ND and from how broadcasting is
avoided with IPv6, deploying IPv6 in a data centre can achieve the goal of reducing
flooding traffic. With the high number of servers and VMs that will function
simultaneously in cloud computing data centres, correlated with the low flexibility and
performance of broadcast services, as well as the pressing need to reduce broadcasting,
IPv6 with its ND protocol can provide an optimal future solution.

5.5 Mobility benefits

Mobile internet services are becoming very affordable for the common customer. This
indicates that cloud computing services will be accessed more and more from mobile
devices that roam between different networks. These devices will be the focal point of the
mobile internet: "The killer application for cloud computing is mobile computing:
equally, the killer app for mobile computing is the cloud. [...] Without cloud computing
services, mobile devices offer limited capabilities." [63] Moreover, as future companies
and businesses move to the cloud, the cloud will take the role of a home data centre. As
a result, both mobile internet and cloud service providers should be interested in
deploying the most efficient solution for providing mobility.

Figure 18. Mobile IPv4


The idea of mobile IP is to provide a solution to the connectivity problem when one
host moves from an initial, home network to a foreign one. As the IP address changes as
a result of this migration, all communication sessions that were initiated in the home
network will break down. Therefore, mobile IP for IPv4 was created; its basic
communication mechanisms are presented in Figure 18, along with its two most
common scenarios.

MIPv4 has some rather serious limitations that, in some cases, make it
difficult, if not impossible, to deploy and use. Firstly, for mobile IP to work with IPv4, a
foreign agent must exist in the foreign network. It is needed to manage all the
communication between the MN (mobile node) and the HA (home agent), as well as
to determine that the MN is on a foreign network. However, plenty of networks all over the
world do not deploy such a device, meaning that, in some cases, mobile IP cannot be
used at all. Secondly, MIP uses triangular routing, shown as scenario 1 in
Figure 18: data from a correspondent node to the mobile node is routed through the home agent,
while returning traffic is direct. However, because of the limitations of IPv4, this
technique might be problematic. The MN uses its home address as source address, which
can pose problems because it might be seen as a spoofing attack. This can be averted by
tunnelling: all traffic is routed through the HA and FA and encapsulated in IP-in-IP or
other protocols.
MIPv6 is not an addition to IPv6, as is the case with MIPv4; it is an integrated
functionality. This means that all IPv6-capable devices on the internet can, theoretically,
use all the functionalities of mobile IP. Moreover, in the next generation mobile
protocol, no FA (foreign agent) is used, which makes MIP a truly mobile
solution. Devices and users will depend only on the policies of the companies that
want to implement such a feature, as these will still have to deploy an HA.
In most cases, due to source address verification, tunnelling all traffic between the
FA and HA will be the choice in MIPv4. The latency and the overhead of the
tunnelling protocol can be fatal to real-time communication. The time between an IPv4
MN's arrival in a foreign network and the moment it can start receiving packets can be
quite high, due to all the setup messages that have to be exchanged. In addition, the
encapsulation of the original packet into another IP packet (IP-in-IP, IPsec, etc.)
increases the overhead.
In MIPv6 all traffic between the MN and CN (correspondent node) is forwarded by
default through the home agent. However, unlike mobile IPv4, the modular nature of
IPv6 is used: a new mobility header is defined, which carries all the necessary
information between the mobile node and its home agent. The traffic flow is more
efficient and the traffic can be better handled. MIPv6 also proposes a technique, route
optimization, that bypasses the HA in case of congestion or simply for a better, direct
connection between the MN and CN. Through this method, direct, bidirectional
communication can be achieved between the two nodes. Moreover, to reduce latency
and improve reliability for real-time applications, MIPv6 improves the handover of
connections from the home network to the new one. [64]
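The core of this mechanism is the home agent's binding cache, which maps a mobile node's home address to its current care-of address. The toy sketch below illustrates the idea only; the class layout, lifetime value and addresses are assumptions for illustration, not the MIPv6 wire format:

```python
import time
from dataclasses import dataclass

@dataclass
class Binding:
    care_of_address: str
    lifetime: float      # seconds the binding stays valid
    created: float

class HomeAgent:
    """Toy binding cache: home address -> current care-of address."""

    def __init__(self):
        self.cache = {}

    def binding_update(self, home_addr: str, care_of: str,
                       lifetime: float = 420.0):
        # Sent by the MN after it configures an address in a foreign network.
        self.cache[home_addr] = Binding(care_of, lifetime, time.time())

    def resolve(self, home_addr: str) -> str:
        # Decide where traffic addressed to the home address should go.
        b = self.cache.get(home_addr)
        if b and time.time() - b.created < b.lifetime:
            return b.care_of_address   # MN is away: tunnel to care-of address
        return home_addr               # no valid binding: deliver on home link

ha = HomeAgent()
ha.binding_update("2001:db8:home::10", "2001:db8:visited::10")
print(ha.resolve("2001:db8:home::10"))   # 2001:db8:visited::10
```

Route optimization then lets the CN learn the care-of address itself and send to it directly, removing the home agent from the forwarding path.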
As stated in a previous chapter, virtual machine migration is one of the main
challenges present in virtual data centres around the globe. By moving powered-on
VMs across a network, data centres can improve their scalability, resource
efficiency and availability. As a physical server gets overwhelmed, a solution
to keep providing services to the customers is to move the VMs to machines that are
either lightly used or newly attached to the network.
One of the most used methods for VM relocation is live migration. It provides
the means to move a VM within the same link. Before a virtual machine is moved, all its
information (memory, storage and network connectivity) is first copied to the
destination. However, this method has a rather low coverage, as the machine can be
moved only within the same LAN segment. It is applicable with both IPv4 and
IPv6. Nevertheless, because of the addressing structure of IPv6, the basic connectivity
of an IPv6 host is achieved very quickly, with the use of the link-local address. So, when
new servers are attached to the network, they become available very fast for the
potential migration of VMs. Methods related to MIPv6 have been proposed that take
the live migration technique to a global scale, such as the one presented in the paper
"A Performance Improvement Method for the Global Live Migration of Virtual
Machine with IP Mobility" [27].
As previously presented, anycast can provide geo-location mechanisms.
Therefore, with IPv6 anycast along with MIPv6 and global live migration, new
techniques can be developed so that VMs are moved automatically to a data
centre closer to the user. This will provide beneficial resource optimization for both
data centres, which will be able to optimize their bandwidth and resource use, and
customers, who will benefit from higher internet speeds and lower delays. The basic
functionality is presented in Figure 19.

Figure 19. MIPv6 and VM migration


The initial communication between the user and a cloud-based VM
is done in a direct fashion, as presented. When the MN moves its location, it announces
this to the HA, which binds the care-of address to the home address. Now, the data
flows from the MN through the HA to the VM. For location determination, a route optimization
has to be started in order for the virtual machine to determine the true address of the
mobile node. Based on this address, the VM takes the decision to migrate or not. In case
it determines that moving to a new data centre is optimal, the VM can apply live
migration procedures, as presented in [27] or [65]. While the machine is moved to a
new location, the traffic flows through the original path. After the migration has
finished, the traffic is temporarily routed through the data centre's HA, the client's HA and the
MN. Again, a route optimization procedure takes place, which connects the mobile
node and the VM directly.
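The migration decision itself can be sketched as a simple policy: once route optimization reveals the mobile node's real address, the VM (or its orchestrator) compares latencies from candidate data centres to that address and moves only when the gain is worthwhile. This is a hypothetical sketch; the data-centre names, latency figures and threshold are all assumptions, not part of [27] or [65]:

```python
def choose_data_centre(latency_ms: dict) -> str:
    """Pick the data centre with the lowest measured latency to the
    mobile node's current (care-of) address."""
    return min(latency_ms, key=latency_ms.get)

def should_migrate(current_dc: str, best_dc: str, latency_ms: dict,
                   threshold_ms: float = 20.0) -> bool:
    # Migrate only if the latency gain clearly outweighs the cost of
    # moving the VM's memory and state across the network.
    return latency_ms[current_dc] - latency_ms[best_dc] > threshold_ms

# Hypothetical latencies from each data centre to the MN's care-of address.
latency = {"dc-helsinki": 12.0, "dc-frankfurt": 48.0}
best = choose_data_centre(latency)
print(best, should_migrate("dc-frankfurt", best, latency))
# dc-helsinki True
```

The threshold guards against oscillation: without it, a VM could bounce between data centres whenever a user roams near a network boundary.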
Based on the general and most obvious drawbacks presented above, mobile IPv4
proves to be at best a transition mechanism. Due to the need for an FA on the foreign
network, it is unlikely that MIPv4 will be deployed on a large scale. In addition, because
of address depletion, MIPv4 will not be able to cope with the increasing number of mobile
devices that will access the internet. Moreover, as the cloud computing world grows
in importance and as cloud providers offer more communication-sensitive applications,
the need for continuous-session communication is paramount. MIPv6 presents itself as
the perfect answer. Furthermore, as internet services become available on fast-moving
means of transportation (cars, trains, airplanes), and as more and more mobile terminals
(iOS, Android, Windows Phone 7, etc.) get online every day, session control and
localization are important for true, reliable and interactive services.

6 Conclusions

As cloud computing is increasing in popularity among common users, businesses and
even governmental agencies, the importance of better performance, security and
reliability is greater than ever before. Data stored in and accessed from the cloud has to be
properly secured and available at any time, so that clients' confidence in such services
can overcome the widespread concerns about security and the mishandling of
sensitive data.
IPv4 has long been and still is the dominant protocol in the internet. As a
consequence, it is also deployed and used on a large scale in data centres all over the
world. However, IPv4 is an outdated protocol that cannot cope with the large
network topologies present nowadays in the internet. The workarounds developed
over the years to preserve the address pool and to add new functionality
have turned IPv4 into a cumbersome, sometimes hard-to-manage protocol. The future
importance of virtualization and virtual environments, along with the possible division
of ISPs into traditional ISPs and virtual ISPs, will put even bigger pressure on
how IP addressing behaves and performs in data centres.
IPv6 is the next protocol, with some obvious, first-hand advantages over its
predecessor. It was created from the idea of a new protocol that can handle a huge
number of hosts in complicated network topologies, and designed to be self-sufficient
through autoconfiguration. Inefficient mechanisms, such as ARP and broadcast, have
been removed and replaced with more efficient ones. While the obvious advantages,
like the huge address pool, provide improvements and benefits to data centres, other
advantages are a direct consequence of these first-hand improvements.
The possibility to avoid NAT gives cloud computing providers the option to
offer proper end-to-end security to the customers that really need or want such features.
IPsec, as an integrated feature of IPv6, can provide a native solution for security
problems. Traffic can be better filtered at the network level, offering security that is
transparent to and independent from the upper-layer applications. The structure of the
IPv6 addressing scheme makes possible better optimization and management of the
traffic that flows inside and between the internet and data centres. As the number of
potential unique addresses is large, new solutions can be given to source address
validation techniques that could not be applied or were impractical with IPv4. QoS and
the new flow label in the header, together with no fragmentation along the route of the
packets, create a better environment for real-time applications and traffic. Furthermore,
as cloud computing services are usually deployed on a global scale, localization
becomes an important feature.
Anycast provides a native method and less unwieldy techniques for localization.
Traffic can be better forwarded towards the data centres that are closest to the user,
thus offering the best possible network performance with respect to bandwidth and
latency. Load balancing can take advantage of the anycast addresses of IPv6 in the
same manner as localization. Moreover, IPv6 offers a more realistic method for its
version of Mobile IP. Studies have been and are still being carried out to optimize and
model MIPv6 so that it offers data centres the means to adapt to changing resource
requirements. VMs can be moved as needed, between physical servers and between
data centres, providing a new step in resource scalability and availability. MIPv6
brings the resources into closer proximity to the users.
IPv6 is a scalable and modular protocol. As has been seen many times, it offers
the advantage of being flexible in adding new features and options based on data
centre requirements. This is done with little to no intervention in the structure of the
internet or in the applications that use the network infrastructures. The advantages that
it brings over IPv4 open a new spectrum of possibilities.
With IPv6, cloud computing could really benefit from the improvements that the
next generation IP brings. IPv6 will create a friendlier environment in which cloud-based
services will develop, probably, at a faster pace. Additionally, putting the
well-known advantages of IPv6 in the context of data centres and cloud computing has
the benefit of creating a clearer idea of the options that will arise once IPv6 is fully
deployed in data centres.
IPv6 was not developed with the cloud computing paradigm in mind. However,
it was shown that it presents itself as a much better protocol, whose improvements seem
to fit perfectly the problems that data centres face or might face in the future. Even
though the full deployment of a native IPv6 infrastructure might be regarded as rather
expensive, by both network administrators and managers, the benefits outweigh the
costs. Less complicated infrastructures with more efficient communication and partially
autoconfigured hosts are a good trade-off. Taking the advantages of IPv6 over IPv4,
both the obvious and the less obvious ones, and putting them side by side with the
cloud computing world has the potential of painting a clearer picture of the importance
that the next generation IP has in the development of cloud computing.

7 Bibliography

[1] J. Turner and D. Taylor, Diversifying the internet, Proceedings of IEEE
GLOBECOM, 2005, pp. 755-760.
[2] P. Loshin, IPv6 Theory, Protocol and Practice, 2nd ed., Elsevier, 2004, pp. 14-15.
[3] S. Kent, IP Authentication Header, RFC 4302, IETF, 2005, p. 34.
[4] S. Deering and R. Hinden, Internet Protocol, Version 6 (IPv6) Specification, RFC
2460, IETF, 1998, p. 39.
[5] K. Nichols, S. Blake, F. Baker and D. Black, Definition of the Differentiated
Services Field (DS Field) in the IPv4 and IPv6 Headers, RFC 2474, IETF,
1998, p. 19.
[6] D. Hadas, Network Abstraction The Network Hypervisor, Haifa Research Lab,
November 2010. [Online]. Available:
http://www.research.ibm.com/haifa/Workshops/cloud2010/present/David_Ha
das_AbstractedNetworks.pdf. [Accessed 21 November 2011].
[7] A. Myers, T. E. Ng and H. Zhang, Rethinking the Service Model: Scaling
Ethernet to a Million Nodes, 2004.
[8] N. Feamster, L. Gao and J. Rexford, How to Lease the Internet in Your Spare
Time, ACM SIGCOMM Computer Communication Review, vol. 37, pp. 61-
-64, 2007.
[9] P. Ganore, Virtualization A Little History, ESDS , 11 November 2011.
[Online]. Available: http://www.esds.co.in/blog/virtualization-a-little-history/.
[Accessed 18 November 2011].
[10] J. De Clercq, D. Ooms, M. Carugi and F. Le Faucheur, BGP-MPLS IP Virtual
Private Network (VPN) Extension for IPv6 VPN, RFC 4659, IETF, 2006, p.
48.
[11] T. Anderson, L. Peterson, S. Shenker and J. Turner, Overcoming the Internet
impasse through virtualization, Computer, vol. 38, pp. 34 - 41, april 2005.
[12] VMware, Performance Comparison of Virtual Network Devices, 2008. [Online].
Available:
http://www.vmware.com/files/pdf/perf_comparison_virtual_network_devices
_wp.pdf. [Accessed 22 November 2011].
[13] Oracle Corporation, Oracle VM VirtualBox User Manual, 2011. [Online].
Available: http://download.virtualbox.org/virtualbox/UserManual.pdf.
[Accessed 22 November 2011].
[14] WMware, Virtual Network Components, WMware, [Online]. Available:
http://www.vmware.com/technical-resources/virtual-networking/networking-
basics.html. [Accessed 22 November 2011].
[15] WMware, Increase Energy Efficiency with Virtualization, WMware, [Online].
Available: http://www.vmware.com/solutions/green-it/. [Accessed 23
November 2011].
[16] K. Marko, Server Virtualization Panacea Or Over-Hyped Technology?, Tech &
Trends, vol. 29, no. 6, p. 24, 9 February 2007 .

[17] S. Rupley, Eyeing the Cloud, VMware Looks to Double Down On Virtualization
Efficiency, Gigaom, 27 January 2010. [Online]. Available:
http://gigaom.com/2010/01/27/eyeing-the-cloud-vmware-looks-to-double-
down-on-virtualization-efficiency/. [Accessed 23 November 2011].
[18] H. Shah, A. Ghanwani and N. Bitar, ARP Broadcast Reduction for Large Data
Centers, Proposed Standard Internet Draft, IETF, 2011, p. 9.
[19] C. Thacker, Rethinking data centers, 2007. [Online]. Available:
http://netseminar.stanford.edu/seminars/10_25_07.ppt. [Accessed 24
November 2011].
[20] W. John and S. Tafvelin, Analysis of internet backbone traffic and header
anomalies observed, Proceedings of the 7th ACM SIGCOMM conference on
Internet measurement, pp. 111--116, 2007.
[21] R. K. Murugesan, S. Ramadass and R. Budiarto, Improving the performance of
IPv6 packet transmission over LAN, IEEE Symposium on Industrial
Electronics Applications, vol. 1, pp. 182 -187, October 2009.
[22] D. Simmons, IPv6 protocol overhead, Caffeinated Bitstream, 11 November 2010.
[Online]. Available: http://cafbit.com/entry/ipv6_protocol_overhead.
[Accessed 24 November 2011].
[23] E. Ertekin, C. Christou and B. A. Hamilton, IPv6 Header Compression, June
2004. [Online]. Available:
http://www.6journal.org/archive/00000068/01/Emre_Ertekin_Christos_Christ
ou.pdf. [Accessed 24 November 2011].
[24] Loadbalancer.org , Loadbalancer.org supported load balancing methods,
Loadbalancer.org , [Online]. Available:
http://loadbalancer.org/load_balancing_methods.php. [Accessed 24
November 2011].
[25] A. Greenberg, J. Hamilton, D. A. Maltz and P. Patel, The cost of a cloud: research
problems in data center networks, SIGCOMM Comput. Commun. Rev., vol.
39, no. 1, pp. 68--73, December 2008.
[26] E. Harney, S. Goasguen, J. Martin, M. Murphy and M. Westall, The Efficacy of
Live Virtual Machine Migrations Over the Internet, Second International
Workshop on Virtualization Technology in Distributed Computing, pp. 1 -7,
November 2007.
[27] H. Watanabe, T. Ohigashi, T. Kondo, K. Nishimura and R. Aibara, A
Performance Improvement Method for the Global Live Migration of Virtual
Machine with IP Mobility, Proc. the Fifth International Conference on Mobile
Computing and Ubiquitous Networking (ICMU 2010), April 2010.
[28] R. Bradford, E. Kotsovinos, A. Feldmann and H. Schiöberg, Live wide-area
migration of virtual machines including local persistent state, Proceedings of
the 3rd international conference on Virtual execution environments, pp. 169-
179, 2007.
[29] Trend Micro, Cloud Insecurities: 43 Percent of Enterprises Surveyed Have had
Security Issues With Their Cloud Service Providers, 6 June 2011. [Online].
Available:
http://apac.trendmicro.com/apac/about/news/pr/article/20110926071604.html.
[Accessed 10 March 2012].
[30] S. Bugiel, S. Nürnberger, T. Pöppelmann, A.-R. Sadeghi and T. Schneider,
AmazonIA: when elasticity snaps back, Proceedings of the 18th ACM
conference on Computer and communications security, pp. 389-400, 2011.
[31] M. Chung and J. Hermans, From Hype to Future, KPMG, Amsterdam, 2010.
[32] V. Kundra, Federal Cloud Computing Strategy, The White House, Washington,
2011.
[33] L. Badger, D. Bernstein, R. Bohn, F. de Vaulx, M. Hogan, J. Mao, J. Messina, K.
Mills, A. Sokol, J. Tong, F. Whiteside and D. Leaf, High-Priority
Requirements to Further USG Agency Cloud Computing Adoption, vol. 1,
Gaithersburg: National Institute of Standards and Technology, 2011.
[34] C. Barnatt, A Brief Guide to Cloud Computing, London: Constable & Robinson
Ltd., 2010.
[35] Amazon, Feature Guide: Amazon EC2 Elastic IP Addresses, Amazon, 31 July
2010. [Online]. Available: http://aws.amazon.com/articles/1346. [Accessed
12 march 2012].
[36] C. Kaufman, Internet Key Exchange (IKEv2) Protocol, RFC 4306, IETF, 2005, p.
100.
[37] T. Kivinen, B. Swander, A. Huttunen and V. Volpe, Negotiation of NAT-
Traversal in the IKE, RFC 3947, IETF, 2005, p. 17.
[38] S. Kent and K. Seo, Security Architecture for the Internet Protocol, RFC 4301,
IETF, 2005, p. 101.
[39] IBM, Authentication Header, IBM, [Online]. Available:
http://publib.boulder.ibm.com/infocenter/iseries/v5r3/index.jsp?topic=%2Frza
ja%2Frzajaahheader.htm. [Accessed 20 March 2012].
[40] A. Shacham, B. Monsour, R. Pereira and M. Thomas, IP Payload Compression
Protocol (IPComp), RFC 3173, IETF, 2001, p. 13.
[41] Oracle, Planning an IPv6 Addressing Scheme (Overview), Oracle, [Online].
Available: http://docs.oracle.com/cd/E19082-01/819-3000/ipv6-overview-
7/index.html. [Accessed 21 March 2012].
[42] N. Ziring, Router Security Guidance Activity (SNAC), National Security Agency,
Maryland, 2006.
[43] S. Szigeti and P. Risztics, Will IPv6 bring better security?, Proceedings in
Euromicro Conference, pp. 532 - 537, August 2004.
[44] R. Hinden and S. Deering, IP Version 6 Addressing Architecture, RFC 4291,
IETF, 2006, p. 25.
[45] T. Narten, R. Draves and S. Krishnan, Privacy Extensions for Stateless Address
Autoconfiguration in IPv6, RFC 4941, IETF, 2007, p. 23.
[46] S. Thomson and T. Narten, IPv6 Stateless Address Autoconfiguration, RFC
2462, IETF, 1998, p. 25.
[47] T. Aura, Cryptographically Generated Addresses (CGA), RFC 3972, IETF, 2005,
p. 22.
[48] S. Groat, M. Dunlop, R. Marchany and J. Tront, IPv6: Nowhere to Run, Nowhere
to Hide, International Conference on System Sciences (HICSS), pp. 1 -10,
January 2011.
[49] S. Qadir and M. U. Siddiqi, Cryptographically Generated Addresses (CGAs): A
survey and an analysis of performance for use in mobile environment,
IJCSNS International Journal of Computer Science and Network Security,
vol. 11, no. 2, pp. 24-31, 2011.
[50] J. Arkko, J. Kempf, B. Zill and P. Nikander, SEcure Neighbor Discovery
(SEND), RFC 3971, IETF, 2005, p. 56.
[51] A. Kukec, M. Bagnulo and M. Mikuc, SEND-based source address validation for
IPv6, International Conference on Telecommunications, pp. 199 -204, June
2009.
[52] P. Tan, Y. Chen, H. Jia and J. Mao, A hierarchical source address validation
technique based on cryptographically generated address, Computer Science
and Automation Engineering , vol. 2, pp. 33 -37, 2011.
[53] E. Ahmed, M. Aazam and A. Qayyum, Comparison of various IPv6 flow label
formats for end-to-end QoS provisioning, IEEE 13th International Multitopic
Conference, pp. 1 -5, December 2009.
[54] omnisecu.com, Limitations of IPv4, omnisecu.com, 2008. [Online]. Available:
http://www.omnisecu.com/tcpip/ipv6/limitations-of-ipv4.htm. [Accessed 28
April 2012].
[55] C. Popoviciu, E. Levy-Abegnoli and P. Grossetete, Deploying IPv6 Networks,
Indianapolis: Cisco Press, 2006.
[56] T. Benson, A. An, A. Akella and M. Zhang, The Case for Fine-Grained Traffic
Engineering in Data Centers, Proceedings of the 2010 internet network
management conference on Research on enterprise networking, pp. 2-2, 2010.
[57] J. Abley, K. Lindqvist, E. Davies, B. Black and V. Gill, IPv4 Multihoming
Practices and Limitations, RFC 4116, IETF, 2005, p. 13.
[58] B. Carpenter and S. Amante, Using the IPv6 Flow Label for Equal Cost Multipath
Routing and Link Aggregation in Tunnels, Internet-Draft, IETF, 2011, p. 10.
[59] F. Weiden and P. Frost, Anycast as a load balancing feature, Proceedings of the
24th international conference on Large installation system administration, pp.
1--6, 2010.
[60] Amazon Web Services, Amazon Route 53, Amazon Web Services, [Online].
Available: http://aws.amazon.com/route53/. [Accessed 13 May 2012].
[61] Y.-H. Kang and B.-G. Jung, IPv6 Anycast Routing aware of a Service Flow,
IEEE International Symposium on Consumer Electronics, pp. 1 -4, June 2007.
[62] C. Kim and J. Rexford, Revisiting Ethernet: Plug-and-play made scalable and
efficient, Proceedings of the 2007 15th IEEE Workshop on Local and
Metropolitan Area Networks, pp. 163-168, 2007.
[63] Vodafone, Cloud computing and enterprise mobility, Vodafone, 24 January 2011.
[Online]. Available: http://enterprise.vodafone.com/insight_news/2011-01-
24-cloud-computing-and-enterprise-mobility.jsp. [Accessed 15 May 2012].
[64] R. Koodli, Mobile IPv6 Fast Handovers, RFC 5568, IETF, 2009, p. 51.
[65] J. Chen and X. Chen, Mobile IPv4/IPv6 virtual machine migration transition
framework for cloud computing, Journal of Convergence Information
Technology, vol. 7, no. 3, pp. 226-232, 2012.

