
Socially-aware Management of

New Overlay Application Traffic with
Energy Efficiency in the Internet
European Seventh Framework Project FP7-2012-ICT-317846-STREP

Deliverable D4.2
Experiments Definition and Set-up

The SmartenIT Consortium
University of Zürich, UZH, Switzerland
Athens University of Economics and Business - Research Center, AUEB-RC, Greece
Julius-Maximilians-Universität Würzburg, UniWue, Germany
Technische Universität Darmstadt, TUD, Germany
Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie, AGH, Poland
Intracom S.A. Telecom Solutions, ICOM, Greece
Alcatel-Lucent Bell Labs, ALBLF, France
Instytut Chemii Bioorganicznej PAN, PSNC, Poland
Interoute S.p.A., IRT, Italy
Telekom Deutschland GmbH, TDG, Germany

© Copyright 2015, the Members of the SmartenIT Consortium
For more information on this document or the SmartenIT project, please contact:
Prof. Dr. Burkhard Stiller
Universität Zürich, CSG@IFI
Binzmühlestrasse 14
CH-8050 Zürich
Switzerland
Phone: +41 44 635 4331
Fax: +41 44 635 6809
E-mail: info@smartenit.eu

Version 1.0


D4.2 - Experiments Definition and Set-up

Seventh Framework STREP No. 317846
Commercial in Confidence

Document Control
Title: Experiments Definition and Set-up
Type: Internal
Editor(s): Roman Łapacz
E-mail: romradz@man.poznan.pl
Author(s): Jeremias Blendin, Valentin Burger, Paolo Cruschelli, David Hausheer, Fabian Kaup, Roman Łapacz, Łukasz Łopatowski, George Petropoulos, Grzegorz Rzym, Michael Seufert, Rafał Stankiewicz, Matthias Wichtlhuber, Piotr Wydrych, Zbigniew Duliński, Krzysztof Wajda
Doc ID: D4.2-v1.0

AMENDMENT HISTORY

V0.5, April 24, 2014, Roman Łapacz – First version, providing ToC
V0.6, June 6, 2014, David Hausheer – Include experiment mapping to test-bed
V0.9, Jul 26, 2014, Roman Łapacz – Initial information on experiments and show cases (copied from drafts of overall assessment cards)
V0.9, Jan 25, 2015, Michael Seufert – RB-HORST experiments
V0.9.1, Mar 23, 2015, George Petropoulos – RB-HORST experiments
V0.9.1-0.1, Mar 27, 2015 – Input and updates from the partners.
V0.9.1-0.40, Apr 24, 2015, Michael Seufert, George Petropoulos, Fabian Kaup, Łukasz Łopatowski, Grzegorz Rzym, Rafał Stankiewicz, Jeremias Blendin, Matthias Wichtlhuber, Roman Łapacz
V0.9.1-0.41, Apr 28, 2015, Michael Seufert, George Petropoulos, Fabian Kaup, Łukasz Łopatowski, Grzegorz Rzym, Rafał Stankiewicz, Jeremias Blendin, Matthias Wichtlhuber, Roman Łapacz – Updates after the D4.2 internal review
V0.9.1-0.49 / V1.0, Apr 30, 2015, Roman Łapacz – Final version submitted to the EC

Legal Notices
The information in this document is subject to change without notice.
The Members of the SmartenIT Consortium make no warranty of any kind with regard to this document,
including, but not limited to, the implied warranties of merchantability and fitness for a particular purpose. The
Members of the SmartenIT Consortium shall not be held liable for errors contained herein or direct, indirect,
special, incidental or consequential damages in connection with the furnishing, performance, or use of this
material.


Table of Contents
1 Executive Summary  5
2 Introduction  6
  2.1 Purpose of the Document D4.2  6
  2.2 Document Outline  6
3 Experiments  7
  3.1 OFS Experiments  7
    3.1.1 Evaluation of multi-domain traffic cost reduction in DTM: S-to-S case  17
    3.1.2 Evaluation of multi-domain traffic cost reduction in DTM: M-to-M case  23
  3.2 EFS Experiments  30
    3.2.1 Evaluation of caching functionality in RB-HORST  33
    3.2.2 Large-scale RB-HORST++ Study  36
    3.2.3 Evaluation of data offloading functionality in RB-HORST  39
4 Showcases  44
  4.1 Multi-domain network traffic optimization in DTM  44
    4.1.1 Scenario topology  44
    4.1.2 Scenario assumptions  47
    4.1.3 Reference scenario  48
    4.1.4 Showcase scenario  48
  4.2 Locality, social awareness and WiFi offloading in RB-HORST  51
    4.2.1 Scenario topology  51
    4.2.2 Scenario assumptions  52
    4.2.3 Reference scenario  52
    4.2.4 Showcase scenario  53
  4.3 Mobile Internet Access Offloading in EEF/RB-HORST  56
    4.3.1 Scenario topology  57
    4.3.2 Scenario assumptions  58
    4.3.3 Reference scenario  58
    4.3.4 Showcase scenario  58
5 Summary  60
6 SMART Objectives  61
7 References  64
8 Abbreviations  65
9 Acknowledgements  66


1 Executive Summary
This deliverable D4.2 – “Experiments Definition and Set-up” presents a detailed
description of experiments representing SmartenIT scenarios, both Operator Focused
(OFS) and End-user Focused (EFS), as defined in WP1 and matching the use-cases
proposed in WP2. WP3 selected and implemented two network traffic management
mechanisms for SmartenIT, namely DTM and RB-HORST, hence the experiments
described in this document are defined in order to evaluate these two solutions over the
SmartenIT test-beds.
Each experiment definition contains the following parts:
- Goal – the overall concept and purpose of an experiment,
- Deployment infrastructure – network topology and configuration,
- Parameters, measurements and metrics – details needed to evaluate the quality of the SmartenIT mechanisms and the implementation,
- Test procedures – actions to execute the implemented mechanisms.

Such a format has been formalized to hand over complete instructions on how to run the SmartenIT experiments and which metrics and parameters need to be collected during the evaluation of the prototype in order to properly assess the SmartenIT solutions.
The authors decided to focus on a small set of experiments covering the challenges addressed by the SmartenIT project. The experiments must clearly and accurately evaluate the project solutions and the quality of the pilot implementation.
This deliverable also reports on preliminary showcases. The project team demonstrated the running pilot implementation and its major functionalities during the second-year technical review with the EC. Showcases can be considered preliminary experiments aimed at showing the basic behaviour of the SmartenIT network traffic mechanisms in a test-bed environment. The experience collected in preparing the showcases was an important input to the work on the final, advanced experiments documented in this deliverable.


2 Introduction
The goal of this document is to provide the definitions of the SmartenIT experiments for the evaluation of the prototypes. The presented details instruct how to evaluate the network traffic management mechanisms of the SmartenIT scenarios, both the Operator Focused Scenario (OFS) and the End-user Focused Scenario (EFS), as proposed in WP2 of the SmartenIT project. Apart from test procedures, experimenters are equipped with sets of parameters, metrics, test-bed configurations and other information needed to properly execute the experiments.
The key requirement of each experiment is the use of the prototype implementation created in WP3. This allows evaluating the algorithms of the network traffic management mechanisms as well as the quality of the software implementation.
At the end of year 2, the project team prepared the showcases presenting the behaviour of the two network traffic mechanisms: DTM and RB-HORST. These are also reported in this document, since the experience gained during the preparation of the showcases was an important input to the further work on the experiment definitions.

2.1 Purpose of the Document D4.2
Deliverable D4.2 is a guide for those who will execute the SmartenIT experiments. The software pilot implementation deployed in the test-bed infrastructures must show whether the mechanisms developed by the project meet the requirements and thus address the challenges defined in the project. Execution of the experiments defined in this document will provide the set of data required to conduct the evaluation and assessment actions. Moreover, the description of the showcases, which are simplified versions of the experiment scenarios, helps to understand how the pilot implementation and the selected SmartenIT network traffic management mechanisms work.

2.2 Document Outline
This document is organized as follows:
Section 3 provides detailed information about the experiments which are planned to be executed. The results of the experiments will be an input to the assessment process and the final project conclusions. This section provides experimenters with all information needed to properly adjust a test-bed environment, configure the SmartenIT prototype, run the test procedures and collect the results. As the project is focused on two scenarios, OFS and EFS, each of them is represented by a set of experiments. The OFS experiments evaluate the DTM mechanism, while the EFS experiments focus on RB-HORST.
Section 4 describes the showcases with the SmartenIT pilot implementation which were presented in the Year 2 Technical Review with the EC. All details of the three showcases (one for DTM, one for RB-HORST and one for RB-HORST with the Energy Efficiency Measurement Framework) have been reported.
Section 5 summarizes the deliverable and draws the major conclusions on the defined
experiments and next steps of the evaluation process.
Section 6 reports on how SMART objectives, as described in SmartenIT’s Description of
Work (DoW) [1], have been addressed by the work performed in WP4.


3 Experiments
In this section, the experiment definitions of two types, namely OFS and EFS, are described in detail. The experiments are defined in such a way as to validate the pilot implementation developed in the SmartenIT project.

3.1 OFS Experiments
For the OFS experiments, the "bulk data transfer for cloud operators" use-case is taken. In all OFS experiments we assume that DTM is used to manage the traffic by an ISP which hosts a cloud or data center receiving traffic from one or more clouds/data centers. This ISP has two inter-domain links and strives to distribute the traffic among those links in such a way that the total traffic cost is minimized.
A set of experiments to evaluate the functionality and performance of DTM is planned. The experiments may be classified according to the following two main settings:
- the number of clouds/DCs sending or receiving the manageable traffic,
- the type of tariff used for the calculation of cost on the inter-domain links.

In the former dimension we can distinguish two groups of experiments:
- Single-to-Single (S-to-S): traffic is generated by a single source (a single cloud/DC) and sent to a single receiver (a single cloud/DC). In this case there is a single ISP hosting the DC that receives the traffic. The ISP's domain is multi-homed and manages two inter-domain links. The manageable traffic is generated by a single DC located in a remote ISP's domain.
- Multiple-to-Multiple (M-to-M): there are two ISPs' autonomous systems, each hosting a DC that receives the manageable traffic, i.e., there are two ASes that perform traffic management using DTM. Both ASes are multi-homed (each has two inter-domain links). There are also two DCs, located in two distinct remote ISPs' domains, that serve as manageable traffic sources.

The second main classification criterion is based on the tariff used for billing the traffic on the inter-domain links. For each of the above two groups of experiments we plan to execute the performance evaluation separately for the volume-based tariff and the 95th percentile tariff.
Another dimension of experiment classification is the distinction between functionality tests and performance evaluation tests. The former will be short experiments to evaluate the mechanism itself and whether the whole test-bed environment operates correctly. The latter type of experiments will be used to evaluate the performance of DTM in a particular test-bed configuration and under a few configuration settings to prove the benefits of using the mechanism. Both qualitative and quantitative metrics and KPIs will be carefully evaluated in this case.
The current status of the specification and implementation of DTM++ does not yet allow experiments to be defined. If possible, an experiment with the S-to-S topology and the 95th percentile tariff will be defined and presented in D4.3.
Common tools and settings
In OFS test-bed experiments we consider one or two domains (autonomous systems) that
perform traffic cost optimization and traffic management using DTM. Since DTM

operations are possible only if an AS is multi-homed, it was assumed that each domain performing DTM has two inter-domain links. Inbound traffic management is performed, i.e., the DCs receiving the traffic from remote DCs are hosted by ASes that use DTM. For simplicity we assume that the domains hosting the DCs being sources of the traffic are single-homed. This assumption on the test-bed construction does not affect the goals and scope of the experiments.
There are two types of traffic: background traffic (non-manageable) and inter-DC manageable traffic. The former is sent over the inter-domain links and dominates, but DTM does not influence it; background traffic is sent over default BGP paths. The manageable traffic is sent between remote Data Centers and consists of multiple flows of various sizes [2] [3].
Physical test-bed configurations
There are three physical test-bed environments deployed, at TUD, PSNC and IRT premises. All test-bed instances are compatible with the basic test-bed described in D4.1 [4] and use the necessary extensions, as detailed in the following.
The basic test-bed instance installed at TUD uses three physical servers. Each machine is equipped with one Intel Xeon E5-1410 CPU @ 2.8 GHz (4 cores, 8 threads) and 16 GB of RAM. Two of them are provided with a 2 TB Toshiba SATA3 enterprise HDD, the last one with a 1 TB Seagate enterprise hard drive. The servers are interconnected with four physical 1 Gbps NICs as described in D4.1.
The mapping of the logical topology to physical machines in the TUD test-bed is presented in Figure 1, using the example of the most complex logical topology, i.e., the one for the M-to-M group of experiments. Mappings for S-to-S, S-to-M and M-to-S can be obtained by simply removing the unused devices from the logical topology and the respective virtual machines from the physical test-bed.
The test-bed installed at PSNC premises uses only two powerful servers instead of the three physical machines proposed in the reference basic test-bed design. Server 1 is equipped with two CPUs @ 2.4 GHz with 6 cores each, as well as 48 GB of RAM. Server 2 comprises the same number of CPUs and cores with a total of 64 GB of RAM. The servers are interconnected with two physical 1 Gb/s Ethernet links.
There are in total 28 virtual machines deployed in the test-bed (14 VMs on each server), which allows conducting the DTM M-to-M experiments. The VMs hosting the SmartenIT prototype software as well as the traffic generators and receivers (for both inter-DC and background traffic emulation) run the Ubuntu 14.04 64-bit operating system.
The IRT test-bed follows the same strategy as the PSNC test-bed, with a two-server deployment. The first server is equipped with a four-core CPU (X3210 @ 2.13 GHz), 8 GB of RAM and 1 TB of HDD space, while the second server is equipped with 32 CPU cores (E5-2450 @ 2.10 GHz), 64 GB of RAM and 1 TB of HDD space. All SmartenIT-related VMs residing on the two servers have been created from Ubuntu 14.04 (x64). The servers are connected over two dedicated 1 Gbps links, while management traffic is carried over a separate link.
Apart from being compatible with the basic test-bed design, each test-bed implements the set of required extensions described in D4.1.


[Figure omitted: logical topology with Clouds A to D (DC-A to DC-D), autonomous systems AS1 to AS5, border gateways (BG-1.1, BG-1.2, BG-4.1, BG-4.2), DA routers (DA-1, DA-4), DC traffic generators/senders and receivers, S-Boxes and SDN controllers, mapped onto the physical machines PC1, PC2 and PC3.]
Figure 1 Mapping of M-to-M experiments logical topology to physical test-bed configuration (TUD test-bed instance).
In order to emulate inter-DC traffic, a custom traffic generator was deployed on selected VMs; detailed information about the generator application is provided later in this section.
In order to enable sophisticated networking configurations inside the test-bed, the software router test-bed extension is implemented. In total, 10 software router VMs running Vyatta VC6.6R1 are deployed for the full test-bed (M-to-M case). The S-to-S scenario is obtained by powering off the VMs belonging to AS4 and AS5.
Moreover, the basic test-bed extension for an OpenFlow-enabled switch has been incorporated; for this purpose, Open vSwitch has been set up on two dedicated VMs.
The mapping of the logical topology to physical machines in the PSNC test-bed is presented in Figure 2.
Traffic generator
During the experiments, an Internet-like traffic generator is used which is able to feed the network with distinct unidirectional UDP flows, each handled by a separate Java thread. The configuration of the generator comprises:
- a definition of the flow inter-arrival time distribution, and
- an unlimited number of flow templates.


[Figure omitted: the same M-to-M logical topology as in Figure 1, mapped onto the physical machines PC1 and PC2.]
Figure 2 Mapping of M-to-M experiments logical topology to physical test-bed configuration (PSNC test-bed instance).

Each time a flow is started, a template is selected and applied. Each template is configured by:
- its selection probability,
- an unlimited number of flow destinations (i.e., IP address and port range),
- a definition of the flow length (i.e., time from start to last packet sent) distribution,
- a definition of the packet inter-arrival time distribution,
- a definition of the UDP payload size (in bytes) distribution,
- a definition of the UDP payload type (e.g., random bytes or zeros).

The advantages of the used generator over other available tools are stability, robustness, efficiency, and portability. It uses the stable Well19937c pseudo-random number generator. It has been tested for 24/7 operational stability with and without a receiver, and it was verified that each instance is able to generate dozens of Mbps of traffic. A sample configuration file is presented below:


<?xml version="1.0" encoding="UTF-8"?>
<!-- $Id: background-1.xml 246 2014-12-18 15:41:55Z wydrych $ -->
<flows>
<flow-inter-arrival-time distribution="exponential">
<param name="mean" value="219" />
</flow-inter-arrival-time>
<flow-template weight="0.6">
<destinations>
<destination weight="1" address="10.10.1.3" min-port="10000" max-port="39999" />
</destinations>
<flow-length distribution="pareto">
<param name="scale" value="3400" />
<param name="shape" value="1.5" />
</flow-length>
<packet-inter-arrival-time distribution="exponential">
<param name="mean" value="35" />
</packet-inter-arrival-time>
<payload-size distribution="normal">
<param name="mu" value="1358" />
<param name="sigma" value="25" />
</payload-size>
<payload type="zero" />
</flow-template>
<flow-template weight="0.4">
<destinations>
<destination weight="1" address="10.10.1.3" min-port="10000" max-port="39999" />
</destinations>
<flow-length distribution="pareto">
<param name="scale" value="3400" />
<param name="shape" value="1.5" />
</flow-length>
<packet-inter-arrival-time distribution="exponential">
<param name="mean" value="35" />
</packet-inter-arrival-time>
<payload-size distribution="normal">
<param name="mu" value="158" />
<param name="sigma" value="20" />
</payload-size>
<payload type="zero" />
</flow-template>
</flows>
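To make the template semantics concrete, the following is a minimal Python sketch of how such a generator selects a template and samples a flow. It is not the project's Java implementation; the distribution parameters are copied from the sample file above, and the Pareto sampling formula is a standard textbook draw assumed here for illustration:

```python
import random

# Parameters copied from the sample configuration (times in ms, sizes in bytes);
# the two templates carry weights 0.6 and 0.4.
TEMPLATES = [
    {"weight": 0.6, "payload_mu": 1358, "payload_sigma": 25},
    {"weight": 0.4, "payload_mu": 158, "payload_sigma": 20},
]
FLOW_IAT_MEAN = 219                      # exponential flow inter-arrival time
PARETO_SCALE, PARETO_SHAPE = 3400, 1.5   # flow length distribution

def next_flow(rng):
    """Sample one flow: inter-arrival gap, flow length, first payload size."""
    tpl = rng.choices(TEMPLATES, weights=[t["weight"] for t in TEMPLATES])[0]
    gap = rng.expovariate(1.0 / FLOW_IAT_MEAN)
    # Standard Pareto draw with scale x_m and shape a: x_m / U^(1/a), U in (0, 1]
    u = 1.0 - rng.random()
    length = PARETO_SCALE / u ** (1.0 / PARETO_SHAPE)
    payload = max(0, int(rng.gauss(tpl["payload_mu"], tpl["payload_sigma"])))
    return gap, length, payload

rng = random.Random(42)                  # fixed seed for reproducibility
flows = [next_flow(rng) for _ in range(10000)]
```

In the real generator each flow additionally samples its destination and per-packet inter-arrival times; the sketch covers template selection only.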

Billing period and cost functions
Four different lengths of a billing period will be used in the experiments: 30 minutes, a few hours, 1 day and a few days (up to one week). For billing periods shorter than 1 day the traffic envelope will be flat. In experiments with a billing period of 1 day or longer, a daily traffic envelope will be introduced. Short billing periods (30 minutes and a few hours) will be used mainly for functionality tests and possibly for basic performance tests. The main experiments for DTM performance evaluation will be conducted with a billing period of 1 day or longer.
Another important setting for the experiments is the selection of the cost functions used on the links. Generally, piecewise linear functions will be used, but the particular settings will be carefully selected for each experiment.
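As an illustration, the sketch below prices a traffic volume under a hypothetical piecewise linear cost function given as marginal rates that change at volume breakpoints. All numbers are assumptions, since the actual settings will be chosen per experiment:

```python
def piecewise_linear_cost(volume, segments):
    """Cost of `volume` under piecewise linear pricing; `segments` is a list
    of (upper_volume_limit, marginal_rate) pairs in increasing limit order."""
    cost, prev_limit = 0.0, 0.0
    for limit, rate in segments:
        if volume <= limit:
            return cost + (volume - prev_limit) * rate
        cost += (limit - prev_limit) * rate
        prev_limit = limit
    return cost  # unreachable when the last limit is infinity

# Hypothetical tariff: first 100 GB at 2.0/GB, next 400 GB at 1.5/GB, then 1.0/GB.
SEGMENTS = [(100.0, 2.0), (500.0, 1.5), (float("inf"), 1.0)]
```

For example, 300 GB costs 100·2.0 + 200·1.5 = 500 under this tariff.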
Measurement points
For the purposes of the performance evaluation of DTM, adequate traffic measurements are needed. The measurement points are presented in Figure 3. For each autonomous system that receives the traffic and uses DTM for cost minimization, traffic measurements must be done on each inter-domain link and each tunnel. On the input


interfaces of the border gateway routers (BG) we measure the total (background + manageable) traffic passing a given link. On the DA router (Data Center Attachment point [5]) we measure the manageable traffic incoming via each tunnel. Therefore, assuming that an AS has two inter-domain links and two tunnels, in total four measurement points must be defined; this is the case for the S-to-S experiments. For more complex scenarios, where more than one domain generates the traffic and there are more tunnels, the number of measurement points increases. A detailed list of measurement points for each experiment is provided in the respective subsections below.
The traffic measurements are realized by polling interface statistics via SNMP, using a dedicated application independent from the SmartenIT traffic management software. The frequency of measurements may vary between experiments. After collection, the results are correlated and analysed using both dedicated applications and generic data mining tools.
[Figure omitted: a multi-homed AS with border gateway routers BG1 and BG2, a DA router and an S-Box; measurement points capture the total (background + manageable) traffic on inter-domain links L1 and L2 and the manageable traffic on tunnels 1 and 2.]
Figure 3 Vantage points for traffic measurements in DTM test-bed.
Performance metrics and KPIs
This section extends the KPI definitions provided in deliverable D4.1. We define performance metrics and KPIs, as well as notation for measured values, separately for experiments with the volume based tariff and the 95th percentile based tariff. The common and general notation encompasses:
- length of the billing period: T
- total traffic on an inter-domain link (sum of background and manageable traffic): X
- manageable traffic (sent via tunnels): Z


For simplicity of description we consider a single multi-homed autonomous system that manages the traffic using DTM (as presented in Figure 3). The description of the performance metrics in this section and the notation used adhere to a single AS having two inter-domain links and two tunnels for manageable traffic. The performance metrics and KPIs introduced here are general; variations dependent on a specific experiment setting will be described in the respective sections dedicated to the experiments' definitions.
We define two types of metrics:
- point metrics, which represent the actual benefits of DTM and are calculated for an ended billing period,
- live metrics, which are observed during the billing period and allow the performance of DTM to be estimated in real-time.
Two important metrics are the expected absolute value of the inter-domain traffic cost on each link and the summary cost. These values are calculated as follows:

    D^R_m = f_m(R_m),        D^R = Σ_m D^R_m

where R_m is the m-th component of the reference vector R. Some KPIs defined below will refer to this expected absolute value of the cost incurred by the transferred traffic.
Total volume based tariff
The basic metric is the total amount of traffic transferred via the inter-domain links during the billing period. The total traffic accumulated on link m is denoted by X^V_m (where m ∈ {1, 2} for an AS having two inter-domain links). Cost functions, denoted f_m(·), are defined for each inter-domain link. The actual cost of the traffic sent via link m is calculated as:

    D_m = f_m(X^V_m)

The total cost the ISP pays for inter-domain traffic in a billing period is D = Σ_m D_m.
The total amount of manageable traffic that was received via inter-domain link m is denoted by Z^V_m. If DTM were not used, all the manageable traffic from the DC serving as the traffic source to the DC receiving the traffic would pass over a default BGP path. This observation leads to the definition of the first KPIs:

    ξ^(1) = [f_1(X^V_1) + f_2(X^V_2)] / [f_1(X^V_1 + Z^V_2) + f_2(X^V_2 − Z^V_2)]

    ξ^(2) = [f_1(X^V_1) + f_2(X^V_2)] / [f_1(X^V_1 − Z^V_1) + f_2(X^V_2 + Z^V_1)]

KPI ξ^(1) denotes the relative monetary gain of using DTM. It is the ratio of the total cost with traffic management to the total cost without traffic management, i.e., the case when a default BGP path is used (all manageable traffic passes inter-domain link 1). In turn, ξ^(2) denotes the monetary gain of balancing the traffic with DTM instead of using link 2 as the default

BGP path. If both values are lower than 1, the ISP benefits from using DTM regardless of which link would be used as the default BGP path. If, for instance, ξ^(1) is greater than or equal to 1, it is better for the ISP to transfer all manageable traffic via link 1, i.e., to use this link as the default BGP path.
In turn, the absolute cost benefit (or loss) from using DTM is expressed as:

    ΔD^(1) = f_1(X^V_1 + Z^V_2) + f_2(X^V_2 − Z^V_2) − f_1(X^V_1) − f_2(X^V_2)

or

    ΔD^(2) = f_1(X^V_1 − Z^V_1) + f_2(X^V_2 + Z^V_1) − f_1(X^V_1) − f_2(X^V_2)

if link 1 or link 2, respectively, is considered the default path.
Another KPI represents the relation of the achieved cost to the cost expected if the achieved distribution of traffic among the links were exactly equal to the reference vector. It is defined as:

    ρ = D / D^R
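As a worked illustration of these KPIs, the sketch below evaluates ξ^(1), ΔD^(1) and ρ for two hypothetical linear cost functions and made-up per-period volumes; none of the numbers come from the test-bed:

```python
# Hypothetical linear cost functions for the two inter-domain links.
f1 = lambda v: 2.0 * v   # link 1: 2.0 money units per GB
f2 = lambda v: 1.0 * v   # link 2: 1.0 money unit per GB

# Assumed volumes accumulated over one billing period [GB].
X1, X2 = 500.0, 700.0    # total traffic on links 1 and 2 (with DTM)
Z2 = 300.0               # manageable traffic received via link 2
R = (400.0, 800.0)       # reference vector targeted by DTM

D = f1(X1) + f2(X2)                    # actual total cost with DTM
D_no_dtm = f1(X1 + Z2) + f2(X2 - Z2)   # link 1 used as the default BGP path
xi1 = D / D_no_dtm                     # relative gain (< 1 means DTM pays off)
dD1 = D_no_dtm - D                     # absolute benefit of using DTM
DR = f1(R[0]) + f2(R[1])               # cost expected at the reference vector
rho = D / DR                           # achieved cost vs. reference cost
```

With these numbers D = 1700 and D_no_dtm = 2000, so ξ^(1) = 0.85 and the ISP saves 300 money units.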

Live performance metrics are built on periodic traffic measurements during the billing period. Let us assume that the billing period of length T is divided into N measurement periods of length Δt, where T mod Δt = 0. Let us denote by τ_i the time that has elapsed from the beginning of the current billing period to the moment of collection of the i-th measurement, where i ∈ [1, N]; in other words, τ_i = i·Δt.
At each point in time τ_i the accumulated traffic volume, denoted x^V_{m,i}, is measured. Given the traffic volume at τ_i and the length of the billing period T, the total volume on link m expected by the end of the billing period is estimated by linear approximation as:

    X̂_{m,i} = x^V_{m,i} · T / τ_i

Then the cost of the traffic on link m expected by the end of the billing period, estimated at time τ_i, is calculated as:

    D̂_{m,i} = f_m(X̂_{m,i})

where f_m(·) is the cost function on that link.
The idea of the linear approximation is presented in Figure 4; the same procedure is repeated for each inter-domain link. Finally, for a multi-homed domain having m inter-domain links, the estimate at time τ_i of the total cost the ISP expects to pay is calculated as D̂_i = Σ_m D̂_{m,i}. This method was used for the presentation of the cost estimation in the showcase during the second year project review.
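A minimal sketch of this live estimate, with assumed numbers (a one-day billing period expressed in minutes and a hypothetical linear tariff):

```python
def estimated_cost(x_m_i, tau_i, T, f_m):
    """End-of-period cost estimate for one link: linearly extrapolate the
    volume accumulated by time tau_i to the full billing period T, price it."""
    X_hat = x_m_i * T / tau_i
    return f_m(X_hat)

T = 24 * 60                    # billing period: one day, in minutes
f1 = lambda v: 1.5 * v         # hypothetical linear cost function for link 1
# After 6 hours (360 min), 90 GB accumulated -> 360 GB expected, cost 540.
est = estimated_cost(90.0, 360, T, f1)
```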


[Figure omitted: volume accumulated on link 1 up to times τ_i and τ_{i+1}, with the linear extrapolations X̂_{1,i} and X̂_{1,i+1} to the end of the billing period T and the corresponding cost estimates D̂_{1,i} and D̂_{1,i+1}.]
Figure 4 Estimation of costs on an inter-domain link for volume based tariff.
95th percentile based tariff
In the case of the 95th percentile rule tariff, the whole billing period is divided into a number of 5-minute intervals, and in each interval the amount of traffic transferred is measured. At the end of the billing period, the smallest sample (in the ordered list of samples) such that 95% of the samples are less than or equal to it is found; this sample is used to calculate the cost of the traffic. Therefore the actual cost remains unknown until the last sample in the billing period is collected.
Let us define X^A_m = {X_{m,1}, …, X_{m,i}, …, X_{m,N}} as the set of 5-minute samples, where m denotes the link number and i is the sample's sequential number in the order of collection, i.e., sample 1 is the first sample collected in the billing period, and sample i is the sample collected when time i·t_s has elapsed from the beginning of the billing period. The cardinality of the set X^A_m equals N = T/t_s. Define also a set Z^A_m = {Z_{m,1}, …, Z_{m,i}, …, Z_{m,N}} that contains the samples of manageable traffic; element Z_{m,i} represents the amount of manageable traffic within sample X_{m,i}.
We introduce a sorting function S: [1, N] → [1, N] such that X_{m,S(l)} ≥ X_{m,S(l+1)}. We define a set X^H_m which contains the highest 5-minute samples collected during the billing period on link m:

    X^H_m = {X_{m,j} : X_{m,j} ∈ X^A_m ∧ j = S(l) ∧ l ∈ [1, K]},

where K = T/t_s − ⌈0.95·T/t_s⌉ + 1. The set X^H_m is a subset of X^A_m, and each element of X^H_m is higher than any element of the set X^A_m \ X^H_m.
The smallest element of the set X^H_m is the sample considered as the billing basis on link m, i.e., it represents the amount of traffic for which the ISP pays according to the 95th percentile rule. Therefore, the value taken for the cost calculation is defined as X^95_m = min X^H_m. Equivalently, X^95_m = X_{m,h}, where h is the sequential number of the smallest sample in the set X^H_m. Finally, the cost of inter-domain traffic on link m is calculated as:

    D_m = f_m(X^95_m)

The total cost the ISP pays for inter-domain traffic in a billing period is D = Σ_m D_m.
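The billing-sample computation described above can be sketched as follows; the sample values are made up to show that the peak intervals in the top 5% are excluded from billing:

```python
import math

def percentile95_sample(samples, frac=0.95):
    """Smallest of the K highest samples, K = N - ceil(frac*N) + 1 (rule above)."""
    n = len(samples)
    k = n - math.ceil(frac * n) + 1
    return sorted(samples, reverse=True)[k - 1]

# 100 hypothetical 5-minute samples [Mbps]: a flat 10 Mbps with 5 peak intervals.
samples = [10.0] * 95 + [100.0] * 5
x95 = percentile95_sample(samples)   # the five peaks fall in the unbilled top 5%
```

With N = 100 the K = 6 highest samples are five peaks plus one 10 Mbps sample, so the billed sample x95 is 10.0; a sixth peak interval would raise it to 100.0.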

Since all 5-minute samples are collected (set $\mathcal{X}_m^A$) and for each sample we know how much manageable traffic it contains (set $\mathcal{Z}_m^A$), it is possible to predict the cost of the traffic that would have been incurred if DTM had not been used. However, the procedure for finding this cost is more complex than in the case of the volume based tariff.
Let us assume that there are two inter-domain links and that the default BGP path is link 1. As defined above, sets $\mathcal{X}_m^A$ and $\mathcal{Z}_m^A$ contain samples for the case with DTM. By $\mathcal{X}_1^{A,(1)}$ and $\mathcal{X}_2^{A,(1)}$ we denote the expected sets of samples collected on link 1 and link 2, respectively, if DTM were not used and the default BGP path were link 1. These sets can be predicted as follows:
$$\mathcal{X}_1^{A,(1)} = \{X_{1,1} + Z_{2,1}, \ldots, X_{1,i} + Z_{2,i}, \ldots, X_{1,N} + Z_{2,N}\}$$
$$\mathcal{X}_2^{A,(1)} = \{X_{2,1} - Z_{2,1}, \ldots, X_{2,i} - Z_{2,i}, \ldots, X_{2,N} - Z_{2,N}\}$$
The next step is to find the corresponding sets of $K$ highest samples on each link: $\mathcal{X}_1^{H,(1)}$ and $\mathcal{X}_2^{H,(1)}$. Then the smallest sample in each set is found. The sizes of those samples are used to calculate the cost that the ISP would have to pay without using DTM. Similarly to the approach presented for the total volume based tariff, the following KPIs can be defined:

• relative monetary gain of using DTM instead of using link 1 as the default BGP path:
$$\xi^{(1)} = \frac{f_1(X_1^{95}) + f_2(X_2^{95})}{f_1(\min \mathcal{X}_1^{H,(1)}) + f_2(\min \mathcal{X}_2^{H,(1)})}$$
• absolute cost benefit (or loss) from using DTM, expressed as:
$$\Delta D^{(1)} = f_1(\min \mathcal{X}_1^{H,(1)}) + f_2(\min \mathcal{X}_2^{H,(1)}) - f_1(X_1^{95}) - f_2(X_2^{95})$$

If we assume that link 2 belongs to the default BGP path, then the expected traffic samples without DTM are defined as follows:
$$\mathcal{X}_1^{A,(2)} = \{X_{1,1} - Z_{1,1}, \ldots, X_{1,i} - Z_{1,i}, \ldots, X_{1,N} - Z_{1,N}\}$$
$$\mathcal{X}_2^{A,(2)} = \{X_{2,1} + Z_{1,1}, \ldots, X_{2,i} + Z_{1,i}, \ldots, X_{2,N} + Z_{1,N}\}$$
Consequently, we can define the KPIs for that case:
• relative monetary gain of using DTM instead of using link 2 as the default path:
$$\xi^{(2)} = \frac{f_1(X_1^{95}) + f_2(X_2^{95})}{f_1(\min \mathcal{X}_1^{H,(2)}) + f_2(\min \mathcal{X}_2^{H,(2)})}$$
• absolute cost benefit (or loss) from using DTM, expressed as:
$$\Delta D^{(2)} = f_1(\min \mathcal{X}_1^{H,(2)}) + f_2(\min \mathcal{X}_2^{H,(2)}) - f_1(X_1^{95}) - f_2(X_2^{95})$$
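The without-DTM prediction and the resulting KPIs for both default-path cases can be sketched as follows. This is an illustrative Python sketch, not the DTM implementation: the per-link cost functions f1 and f2 are supplied as arbitrary callables, and all function and variable names are our own.

```python
import math

def p95(samples, T, ts=300):
    """Smallest of the K highest 5-minute samples (95th percentile rule)."""
    n = T // ts
    k = n - math.ceil(0.95 * n) + 1
    return min(sorted(samples, reverse=True)[:k])

def predict_no_dtm(x1, x2, z, default_link):
    """Expected per-interval samples if DTM were not used and the
    manageable traffic z followed the default BGP path."""
    if default_link == 1:  # all manageable traffic moves onto link 1
        return [a + b for a, b in zip(x1, z)], [a - b for a, b in zip(x2, z)]
    return [a - b for a, b in zip(x1, z)], [a + b for a, b in zip(x2, z)]

def kpis(x1, x2, z1, z2, f1, f2, T, ts=300):
    """xi^(c) and Delta D^(c) for the default-path cases c = 1, 2."""
    d_with = f1(p95(x1, T, ts)) + f2(p95(x2, T, ts))
    result = {}
    for case, z in ((1, z2), (2, z1)):
        p1s, p2s = predict_no_dtm(x1, x2, z, case)
        d_without = f1(p95(p1s, T, ts)) + f2(p95(p2s, T, ts))
        result[case] = {"xi": d_with / d_without, "delta_D": d_without - d_with}
    return result
```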

The KPI representing the ratio of the cost achieved to the cost expected if the achieved distribution of traffic among links were exactly equal to the reference vector is also valid for the 95th percentile tariff: $\rho = \frac{D}{D_R}$.
Similarly to the volume based tariff case, real-time estimation of the traffic cost will be performed during the billing period; however, the methodology for the 95th percentile tariff is different.

As mentioned at the beginning of this section, to calculate the cost of inter-domain traffic it is necessary to know the size of the smallest sample in a set of $K = \frac{T}{t_s} - \lceil 0.95\,\frac{T}{t_s} \rceil + 1$ highest samples. Therefore, to estimate the expected traffic cost during a billing period we need to collect samples every 5 minutes and update the set of $K$ highest samples. Let us define a temporary set of samples as
$$\mathcal{X}_{m,i}^A = \{X_{m,1}, \ldots, X_{m,i}\}$$
where $|\mathcal{X}_{m,i}^A| = i$. $\mathcal{X}_{m,i}^A$ is the set of $i$ samples collected on link $m$ from the beginning of the current billing period until time $i \cdot t_s$. Thus, the set is updated every 5 minutes. Additionally, the corresponding sets of manageable traffic samples on each link are collected: $\mathcal{Z}_{m,i}^A = \{Z_{m,1}, \ldots, Z_{m,i}\}$. A first estimation of the expected cost is possible when at least $K$ samples have been collected. Then, every 5 minutes a set $\mathcal{X}_{m,i}^H$ of highest samples is found. We introduce a set of sorting functions $S_i: [1,i] \to [1,i]$ such that $X_{m,S_i(l)} \ge X_{m,S_i(l+1)}$. We define a set $\mathcal{X}_{m,i}^H$ which contains the highest 5-minute samples collected on link $m$ from the beginning of the current billing period until time $i \cdot t_s$:
$$\mathcal{X}_{m,i}^H = \{X_{m,j} : X_{m,j} \in \mathcal{X}_{m,i}^A \wedge j = S_i(l) \wedge l \in [1,K]\}$$
Then the smallest sample in set $\mathcal{X}_{m,i}^H$ is taken as the current ($i$-th) estimate of the traffic for which the operator will pay:
$$\hat{X}_{m,i} = \min \mathcal{X}_{m,i}^H$$
The cost of the traffic on link $m$ expected by the end of the billing period, estimated at time $i \cdot t_s$, is then calculated as
$$\hat{D}_{m,i} = f_m(\hat{X}_{m,i})$$
Note that after all $N$ samples are collected (at the end of the billing period) the estimated cost equals the actual one: $\hat{D}_{m,N} = D_m$, since $\mathcal{X}_{m,N}^H = \mathcal{X}_m^H$.
Finally, for a multi-homed domain having $m$ inter-domain links, the estimate at time $t_i$ of the total cost the ISP expects to pay is calculated as $\hat{D}_i = \sum_m \hat{D}_{m,i}$.
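Because only the $K$ highest samples matter, the running estimate can be maintained incrementally, for example with a min-heap of size $K$. The heap is our implementation choice for this sketch, not something mandated by the deliverable; all names are illustrative.

```python
import heapq
import math

class PercentileEstimator:
    """Running estimate of the 95th-percentile billing sample on one link.
    Keeps a min-heap of the K highest 5-minute samples seen so far."""

    def __init__(self, T, ts=300):
        n = T // ts
        self.k = n - math.ceil(0.95 * n) + 1
        self.heap = []  # min-heap holding the K largest samples

    def add_sample(self, x):
        """Record a new 5-minute sample and return the current estimate
        X_hat_{m,i}, or None while fewer than K samples exist."""
        if len(self.heap) < self.k:
            heapq.heappush(self.heap, x)
        else:
            heapq.heappushpop(self.heap, x)  # keep only the K largest
        return self.heap[0] if len(self.heap) == self.k else None
```

The per-link cost estimate is then simply f_m applied to the returned value, and the domain-wide estimate is the sum over links.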
3.1.1 Evaluation of multi-domain traffic cost reduction in DTM: S-to-S case
The goal of this experiment is to evaluate DTM functionality and performance. The use-case considered is "Bulk data transfer for cloud operators". The logical topology for this experiment is presented in Figure 5. There are two domains hosting DCs: AS1 and AS3. The data center located at AS3 (DC-B) serves as the source of manageable traffic, while DC-A receives the traffic. AS1 manages inbound inter-domain traffic in order to reduce inter-domain traffic costs. Using DTM, it influences the distribution of manageable traffic between its two inter-domain links (L1 and L2) in a cost-efficient way. This is achieved by selecting one of the tunnels (tun 1 or tun 2) for flows originating at DC-B.



Figure 5 Logical topology for S-to-S experiment.
Deployment infrastructure
The deployment infrastructure for the S-to-S experiment is presented in Figure 6. The addressing scheme is presented in Table 1. A description of all virtual machines can be found in Table 2.
Table 1 Detailed IP Address table for the production network for the DTM evaluation.

IP Address or Address Range | Usage
10.0.1.0/24  | Interconnection ISP1-ISP2
10.0.2.0/24  | Interconnection ISP1-ISP2
10.1.1.0/30  | Interconnection ISP2-ISP3
10.10.1.0/24 | ISP1
10.10.2.0/24 | ISP1
10.10.3.0/24 | ISP1
10.1.2.0/24  | ISP2
10.1.5.0/24  | ISP2
10.1.6.0/24  | ISP2
10.1.3.0/24  | ISP3
10.1.4.0/24  | ISP3


Figure 6 Deployment of virtual machines on three physical servers in test-bed
environment.


Table 2 Hosts and the services running on them for DTM.

Host | Services | Comment
dtm-isp1-rtr-bg1 | ISP router, interconnection to AS2 (link L1), BG-1.1 | Vyatta software router
dtm-isp1-rtr-bg2 | ISP router, interconnection to AS2 (link L2), BG-1.2 | Vyatta software router
dtm-isp1-rtr-da  | ISP router allowing connection with DC and S-Box, DA-1 | Vyatta software router
dtm-isp1-vmdc    | Data Center receiving inter-domain traffic: DC-A |
dtm-isp1-vmsbox  | S-Box |
dtm-isp1-vmtr1   | Receiver of background traffic passing inter-domain link L1 |
dtm-isp1-vmtr2   | Receiver of background traffic passing inter-domain link L2 |
dtm-isp2-rtr-bg1 | ISP router, interconnection to AS1 and AS3 | Vyatta software router
dtm-isp2-rtr-bg2 | ISP router, interconnection to AS1 | Vyatta software router
dtm-isp2-vmtg1   | Generator of background traffic passing inter-domain link L1 |
dtm-isp2-vmtg2   | Generator of background traffic passing inter-domain link L2 |
dtm-isp3-rtr-bg1 | ISP router, interconnection to AS2 | Vyatta software router
dtm-isp3-ofda    | OVS connected to SDN controller |
dtm-isp3-vmdc    | DC generating inter-domain traffic: DC-B |
dtm-isp3-vmsbox  | S-Box |
dtm-isp3-sdn     | SDN Controller |

Parameters, Measurement and Metrics
Measurement points and measured metrics for the S-to-S experiments, for the two types of tariff (volume based and 95th percentile based), are presented in Table 3 and Table 4, respectively. Performance metrics and KPIs are shown in Table 5.
Based on the collected statistics, the calculation of KPIs will be performed by an external application (e.g., MS Excel, Matlab, Mathematica). The results will be presented in the form of graphs produced with standard tools (e.g., GNU Plot, Matlab, Mathematica).
Table 3 Measurement points and measured values: experiment with volume based tariff.

Measured value | Measurement point | Notation | Frequency
Temporary values of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | $x_{1,i}^V$ and $x_{2,i}^V$ | $\Delta t = 30$ s
Temporary values of manageable traffic | DA-1 router at AS1 | $z_{1,i}^V$ and $z_{2,i}^V$ | $\Delta t = 30$ s
Achieved values of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | $X_1^V$ and $X_2^V$ | End of billing period: $T$
Achieved values of manageable traffic | DA-1 router at AS1 | $Z_1^V$ and $Z_2^V$ | End of billing period: $T$
Compensation vector | S-Box at AS1 | $C$ | $\Delta t = 30$ s
Reference vector | S-Box at AS1 | $R$ | End of billing period: $T$

Table 4 Measurement points and measured values: experiment with 95th percentile based tariff.

Measured value | Measurement point | Notation | Frequency
5-minute samples of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | Elements $i$ of sets $\mathcal{X}_{1,i}^A$ and $\mathcal{X}_{2,i}^A$ | $t_s = 5$ min
Share of manageable traffic in 5-minute samples | DA-1 router at AS1 | Elements $i$ of sets $\mathcal{Z}_{1,i}^A$ and $\mathcal{Z}_{2,i}^A$ | $t_s = 5$ min
Sets of samples on each inter-domain link by the end of billing period and the size of sample used for billing | AS1 border routers: BG-1.1 and BG-1.2 | Sets $\mathcal{X}_1^A$ and $\mathcal{X}_2^A$ and samples $X_1^{95}$ and $X_2^{95}$ | End of billing period: $T$
Sets of samples of manageable traffic by the end of billing period | DA-1 router at AS1 | $\mathcal{Z}_1^A$ and $\mathcal{Z}_2^A$ | End of billing period: $T$
Compensation vector | S-Box at AS1 | $C$ | $\Delta t = 30$ s
Reference vector | S-Box at AS1 | $R$ | End of billing period: $T$


Table 5 Performance metrics and KPIs.

Total cost with DTM | $D$, $D_1$ and $D_2$
Cost expected without DTM if link 1 were the default path, and absolute benefit | $D^{(1)}$, $D_1^{(1)}$, $D_2^{(1)}$, $\Delta D^{(1)}$
Cost expected without DTM if link 2 were the default path, and absolute benefit | $D^{(2)}$, $D_1^{(2)}$, $D_2^{(2)}$, $\Delta D^{(2)}$
Cost estimated during the billing period | $\hat{D}_i$, $\hat{D}_{1,i}$ and $\hat{D}_{2,i}$
Relative gain of using DTM | $\xi^{(1)}$ and $\xi^{(2)}$
Ratio of the cost achieved to the cost expected according to the reference vector (accuracy of optimization) | $\rho = \frac{D}{D_R}$

Test procedures
The following three main stages of the experiment are defined.
Stage 1 – Functionality test
The main purpose of the functionality test is a coarse observation of the procedures of the DTM mechanism in order to validate the implementation. For this test the billing period should be set to one hour. The traffic envelope of the generators (both background and DC-DC traffic) should be flat. After two trial billing periods (system warm-up), during the third hour bursts of background traffic will be manually injected into the network in order to verify that they are properly compensated. The functionality test will be performed for both the volume based and the 95th percentile based tariff test-bed setups.
Stage 2 – Performance evaluation test for volume based tariff
Performance evaluation of DTM for the volume based tariff will be performed using the KPIs. The billing period should be set to one week. Since DTM will be started without any initial setup (initial values of the Reference and Compensation vectors), the first week is needed to collect traffic statistics in order to calculate the Reference and Compensation vectors. After that, statistics from the next two billing periods will be collected for evaluation purposes. The performance evaluation test will be run with a daily envelope traffic pattern for both the DC-DC and background traffic generators.
Stage 3 – Performance evaluation test for 95th percentile based tariff
The performance evaluation test for the 95th percentile based tariff will also be performed using the KPIs. Similarly to the volume based tariff test, a one-week billing period will be used. The measurements will be taken during the two billing periods following the initial billing period. DC-DC and background traffic profiles with a daily envelope pattern will be used.
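The stages above call for flat and daily envelope traffic patterns for the generators. The deliverable does not fix a concrete envelope shape, so the sketch below uses a sinusoid with a configurable peak hour purely as an illustration; the base rate, amplitude and peak hour are hypothetical parameters.

```python
import math

def daily_envelope(t_seconds, base_mbps=50.0, amplitude_mbps=30.0, peak_hour=20):
    """Target generator rate (Mbit/s) at time t for a daily traffic
    envelope: a 24-hour sinusoid peaking at peak_hour. The shape and
    all parameter values are illustrative assumptions."""
    hour = (t_seconds / 3600.0) % 24
    phase = 2 * math.pi * (hour - peak_hour) / 24
    return base_mbps + amplitude_mbps * math.cos(phase)
```

A flat envelope (Stage 1) is simply the degenerate case amplitude_mbps = 0.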


3.1.2 Evaluation of multi-domain traffic cost reduction in DTM: M-to-M case
The goal of this experiment is to evaluate DTM functionality and performance in a more complex network topology with multiple DCs serving as traffic sources and receivers, and multiple cooperating ISPs running DTM. The use-case considered is again "Bulk data transfer for cloud operators".
The logical topology for this experiment is presented in Figure 7. There are two multi-homed domains performing traffic management with DTM: AS1 and AS4. They host the DCs serving as receivers of manageable traffic: DC-A and DC-C, respectively. There are also two domains, AS3 and AS5, which host the data centers acting as traffic sources. In total, 8 tunnels are established. The reference vector and compensation vector calculated at AS1 are sent to AS3 and AS5. On the basis of their values, AS3 chooses one of two tunnels (tun BA1 or tun BA2) for flows to be sent from DC-B to DC-A. Similarly, in AS5 one of two tunnels (tun DA1 or tun DA2) is chosen for traffic originating at DC-D and destined for DC-A. Analogously, AS4 performs traffic management and sends its reference and compensation vectors to AS3 and AS5 to manage the traffic received by DC-C but generated by DC-B or DC-D, respectively.

Figure 7 Logical topology for M-to-M experiment.

Deployment infrastructure
The deployment infrastructure for the M-to-M experiment is presented in Figure 8. The addressing scheme is presented in Table 6. A description of all virtual machines can be found in Table 7.


Figure 8 Deployment of virtual machines on three physical servers in test-bed
environment.

Page 24 of 66

Version 1.0
© Copyright 2015, the Members of the SmartenIT Consortium

D4.2 - Experiments Definition and Set-up

Seventh Framework STREP No. 317846
Commercial in Confidence

Table 6 Detailed IP Address table for the production network for the DTM evaluation.

IP Address or Address Range | Usage
10.0.1.0/24   | Interconnection ISP1-ISP2
10.0.2.0/24   | Interconnection ISP1-ISP2
10.0.3.0/24   | Interconnection ISP4-ISP2
10.0.4.0/24   | Interconnection ISP4-ISP2
10.1.1.0/24   | Interconnection ISP2-ISP3
10.1.12.0/24  | Interconnection ISP2-ISP5
10.10.1.0/24  | ISP1
10.10.2.0/24  | ISP1
10.10.3.0/24  | ISP1
10.1.2.0/24   | ISP2
10.1.5.0/24   | ISP2
10.1.6.0/24   | ISP2
10.1.3.0/24   | ISP3
10.1.4.0/24   | ISP3
10.10.7.0/24  | ISP4
10.10.9.0/24  | ISP4
10.10.10.0/24 | ISP4
10.1.10.0/24  | ISP5
10.1.11.0/24  | ISP5

Table 7 Hosts and the services running on them for DTM.

Host | Services | Comment
dtm-isp1-rtr-bg1 | ISP router, interconnection to AS2 (link LA1), BG-1.1 | Vyatta software router
dtm-isp1-rtr-bg2 | ISP router, interconnection to AS2 (link LA2), BG-1.2 | Vyatta software router
dtm-isp1-rtr-da  | ISP router allowing connection with DC and S-Box, DA-1 | Vyatta software router
dtm-isp1-vmdc    | Data Center receiving inter-domain traffic: DC-A |
dtm-isp1-vmsbox  | S-Box |
dtm-isp1-vmtr1   | Receiver of background traffic passing inter-domain link LA1 |
dtm-isp1-vmtr2   | Receiver of background traffic passing inter-domain link LA2 |
dtm-isp2-rtr-bg1 | ISP router, interconnection to AS1, AS3 and AS4 | Vyatta software router
dtm-isp2-rtr-bg2 | ISP router, interconnection to AS1, AS4 and AS5 | Vyatta software router
dtm-isp2-vmtg1   | Generator of background traffic passing inter-domain links LA1 and LC1 |
dtm-isp2-vmtg2   | Generator of background traffic passing inter-domain links LA2 and LC2 |
dtm-isp3-rtr-bg1 | ISP router, interconnection to AS2 | Vyatta software router
dtm-isp3-ofda    | OVS connected to SDN controller |
dtm-isp3-vmdc    | DC generating inter-domain traffic: DC-B |
dtm-isp3-vmsbox  | S-Box |
dtm-isp3-sdn     | SDN Controller |
dtm-isp4-rtr-bg1 | ISP router, interconnection to AS2 (link LC1), BG-4.1 | Vyatta software router
dtm-isp4-rtr-bg2 | ISP router, interconnection to AS2 (link LC2), BG-4.2 | Vyatta software router
dtm-isp4-rtr-da  | ISP router allowing connection with DC and S-Box, DA-4 | Vyatta software router
dtm-isp4-vmdc    | Data Center receiving inter-domain traffic: DC-C |
dtm-isp4-vmsbox  | S-Box |
dtm-isp4-vmtr1   | Receiver of background traffic passing inter-domain link LC1 |
dtm-isp4-vmtr2   | Receiver of background traffic passing inter-domain link LC2 |
dtm-isp5-rtr-bg1 | ISP router, interconnection to AS2 | Vyatta software router
dtm-isp5-ofda    | OVS connected to SDN controller |
dtm-isp5-vmdc    | DC generating inter-domain traffic: DC-D |
dtm-isp5-vmsbox  | S-Box |
dtm-isp5-sdn     | SDN Controller |

Parameters, Measurement and Metrics
Measurement points and the sets of measured metrics are the same as for the S-to-S scenario. The same applications for measurement, data collection and KPI calculation will also be used. The only difference is that they are defined for the two domains that manage the inbound traffic, AS1 and AS4, instead of a single domain. Measurement points and measured values for the experiments with the volume based tariff and the 95th percentile tariff are given in Table 8 and Table 9, respectively. Performance metrics and KPIs are also observed separately for AS1 and AS4, but the sets of metrics and KPIs are the same in each domain. They are presented in Table 10.


Table 8 Measurement points and measured values: experiment with volume based tariff.

Domain | Measured value | Measurement point | Notation | Frequency
AS1 | Temporary values of total traffic on inter-domain links in AS1 | AS1 border routers: BG-1.1 and BG-1.2 | $x_{1,i}^V$ and $x_{2,i}^V$ | $\Delta t = 30$ s
AS1 | Temporary values of manageable traffic in AS1 | DA-1 router at AS1 | $z_{1,i}^V$ and $z_{2,i}^V$ | $\Delta t = 30$ s
AS1 | Achieved values of total traffic on inter-domain links in AS1 | AS1 border routers: BG-1.1 and BG-1.2 | $X_1^V$ and $X_2^V$ | End of billing period: $T$
AS1 | Achieved values of manageable traffic in AS1 | DA-1 router at AS1 | $Z_1^V$ and $Z_2^V$ | End of billing period: $T$
AS1 | Compensation vector in AS1 | S-Box at AS1 | $C$ | $\Delta t = 30$ s
AS1 | Reference vector in AS1 | S-Box at AS1 | $R$ | End of billing period: $T$
AS4 | Temporary values of total traffic on inter-domain links in AS4 | AS4 border routers: BG-4.1 and BG-4.2 | $x_{1,i}^V$ and $x_{2,i}^V$ | $\Delta t = 30$ s
AS4 | Temporary values of manageable traffic in AS4 | DA-4 router at AS4 | $z_{1,i}^V$ and $z_{2,i}^V$ | $\Delta t = 30$ s
AS4 | Achieved values of total traffic on inter-domain links in AS4 | AS4 border routers: BG-4.1 and BG-4.2 | $X_1^V$ and $X_2^V$ | End of billing period: $T$
AS4 | Achieved values of manageable traffic in AS4 | DA-4 router at AS4 | $Z_1^V$ and $Z_2^V$ | End of billing period: $T$
AS4 | Compensation vector in AS4 | S-Box at AS4 | $C$ | $\Delta t = 30$ s
AS4 | Reference vector in AS4 | S-Box at AS4 | $R$ | End of billing period: $T$


Table 9 Measurement points and measured values: experiment with 95th percentile based tariff.

Domain | Measured value | Measurement point | Notation | Frequency
AS1 | 5-minute samples of total traffic on inter-domain links | AS1 border routers: BG-1.1 and BG-1.2 | Elements $i$ of sets $\mathcal{X}_{1,i}^A$ and $\mathcal{X}_{2,i}^A$ | $t_s = 5$ min
AS1 | Share of manageable traffic in 5-minute samples | DA-1 router at AS1 | Elements $i$ of sets $\mathcal{Z}_{1,i}^A$ and $\mathcal{Z}_{2,i}^A$ | $t_s = 5$ min
AS1 | Sets of samples on each inter-domain link by the end of billing period and the size of sample used for billing | AS1 border routers: BG-1.1 and BG-1.2 | Sets $\mathcal{X}_1^A$ and $\mathcal{X}_2^A$ and samples $X_1^{95}$ and $X_2^{95}$ | End of billing period: $T$
AS1 | Sets of samples of manageable traffic by the end of billing period | DA-1 router at AS1 | $\mathcal{Z}_1^A$ and $\mathcal{Z}_2^A$ | End of billing period: $T$
AS1 | Compensation vector | S-Box at AS1 | $C$ | $\Delta t = 30$ s
AS1 | Reference vector | S-Box at AS1 | $R$ | End of billing period: $T$
AS4 | 5-minute samples of total traffic on inter-domain links | AS4 border routers: BG-4.1 and BG-4.2 | Elements $i$ of sets $\mathcal{X}_{1,i}^A$ and $\mathcal{X}_{2,i}^A$ | $t_s = 5$ min
AS4 | Share of manageable traffic in 5-minute samples | DA-4 router at AS4 | Elements $i$ of sets $\mathcal{Z}_{1,i}^A$ and $\mathcal{Z}_{2,i}^A$ | $t_s = 5$ min
AS4 | Sets of samples on each inter-domain link by the end of billing period and the size of sample used for billing | AS4 border routers: BG-4.1 and BG-4.2 | Sets $\mathcal{X}_1^A$ and $\mathcal{X}_2^A$ and samples $X_1^{95}$ and $X_2^{95}$ | End of billing period: $T$
AS4 | Sets of samples of manageable traffic by the end of billing period | DA-4 router at AS4 | $\mathcal{Z}_1^A$ and $\mathcal{Z}_2^A$ | End of billing period: $T$
AS4 | Compensation vector | S-Box at AS4 | $C$ | $\Delta t = 30$ s
AS4 | Reference vector | S-Box at AS4 | $R$ | End of billing period: $T$

Table 10 Performance metrics and KPIs for AS1 and AS4.

Total cost with DTM | $D$, $D_1$ and $D_2$
Cost expected without DTM if link 1 were the default path, and absolute benefit | $D^{(1)}$, $D_1^{(1)}$, $D_2^{(1)}$, $\Delta D^{(1)}$
Cost expected without DTM if link 2 were the default path, and absolute benefit | $D^{(2)}$, $D_1^{(2)}$, $D_2^{(2)}$, $\Delta D^{(2)}$
Cost estimated during the billing period | $\hat{D}_i$, $\hat{D}_{1,i}$ and $\hat{D}_{2,i}$
Relative gain of using DTM | $\xi^{(1)}$ and $\xi^{(2)}$
Ratio of the cost achieved to the cost expected according to the reference vector (accuracy of optimization) | $\rho = \frac{D}{D_R}$

Test procedures
As for the single-to-single case, three stages are proposed for the multi-to-multi case: stage 1 – functionality test; stage 2 – performance test with volume based tariff; stage 3 – performance test with 95th percentile based tariff. The description of each is an extension of the corresponding single-to-single test procedure.
Stage 1 – Functionality test
The main goal of the functionality test is a basic evaluation of the DTM mechanism in the multi-to-multi experiment configuration. As for the single-to-single experiment, the billing period will be set to one hour. Since a more complex topology is used, more traffic generators and receivers will be utilized. Each traffic generator (background and DC) will be configured to generate a flat envelope traffic pattern. The DTM procedures will be observed during two billing periods. In order to validate the DTM mechanism, bursts of background traffic affecting both receiving ISPs will be injected. The functionality test will be performed for both the volume based and the 95th percentile based tariff.
Stage 2 – Performance evaluation test for volume based tariff
The performance evaluation test for the volume based tariff will be performed in order to calculate the KPIs, computed separately for the two receiving domains (AS1 and AS4). The test setup is the same as for the single-to-single scenario (billing period – one week, usable observation time – 2 billing periods, traffic envelope – daily profile).
Stage 3 – Performance evaluation test for 95th percentile based tariff
The performance evaluation test for the 95th percentile based tariff will be performed using the performance metrics and KPIs, computed separately for the two receiving domains (AS1 and AS4). The test setup is the same as for the single-to-single scenario, i.e., billing period – one week, usable observation time – 2 billing periods, traffic envelope – daily profile.

3.2 EFS Experiments
The end-user focused (EFS) scenario aims at providing increased QoE and energy efficiency for end-users. In particular, this goal is achieved by applying in-network optimization strategies, such as content caching and prefetching, in a socially and energy-aware fashion, while taking ISPs' and application providers' interests into account. In this context, the RB-HORST mechanism has been designed and implemented to support the aforementioned functionalities. It considers a direct involvement of end-users and their devices' resources in the service delivery chain and is based on the concept of the user-owned nano data center (uNaDa).
The EFS experiments will use the released and integrated prototype of RB-HORST to validate its set of functionalities, as well as to evaluate its performance and benefits to all involved stakeholders: end-users, ISPs, and service providers. For each experiment, specific goals and metrics are defined, aiming to quantify the performance of particular RB-HORST components, while the deployment infrastructure and all test procedures, both functional and performance, are also provided, so that the people running the experiments have all the necessary information to execute them. The list of EFS experiments is briefly described below:

• The caching experiment (Section 3.2.1) aims at validating and evaluating the performance of the RB-HORST mechanism's caching and proxying functionality in a test-bed environment.

• The large-scale study (Section 3.2.2) will test the RB-HORST platform in a real-world environment with real users, and will extract all the required measurements to evaluate the performance of the social and overlay prediction algorithms, and the benefits of content prefetching for end-users and ISPs.

• Finally, the mobile data offloading experiment (Section 3.2.3) will monitor the energy consumption of uNaDas and smartphones, and evaluate the bandwidth and energy consumption savings of WiFi offloading under realistic bandwidth conditions.

Figure 9 shows the basic topology of the caching and mobile data offloading experiments. It consists of 3 NSP domains, 2 access domains (AS1 and AS3) and 1 transit domain (AS2), and 3 users (Andri, Sergios and George), each having Internet access through their respective ISP. The transit NSP provides access to the rest of the Internet, e.g., the Facebook and Vimeo servers.
In each user's premises there is a uNaDa, which is a Raspberry Pi hosting the RB-HORST software. Each uNaDa is assigned to its owner via his Facebook credentials and provides 2 SSIDs, one open but with no Internet access, and one private with full Internet access. In addition, each user owns an Android smartphone to access the Internet, with the RB-HORST Android application and at least a web browser installed. Of course, depending on the experiment, there could be multiple users, with their respective Android smartphones, accessing and connecting to the uNaDas.


Figure 9 Basic Topology of EFS experiments.

Figure 10 shows the mapping of the EFS basic topology to the actual test-bed. As indicated in the figure, PC/ISP1, 2 and 3 map to ASes 2, 1 and 3, respectively, meaning that ISP1 is the transit domain, and ISPs 2 and 3 are the access domains.


Figure 10: Mapping of the EFS basic topology to the SmartenIT test-bed


In addition, the following tables provide the IP addresses and the services used to run the EFS experiments.
Table 11 Detailed IP Address table for the production network for the EFS experiments.

IP Address or Address Range | Usage
10.201.0.0/18   | Interconnection
10.201.50.0/30  | Interconnection ISP1-ISP2
10.201.50.4/30  | Interconnection ISP2-ISP3
10.201.50.8/30  | Interconnection ISP1-ISP3
10.201.64.0/18  | ISP1
10.201.100.0/27 | ISP1 core network
10.201.128.0/18 | ISP2
10.201.150.0/27 | ISP2 core network
10.201.191.0/24 | ISP2 RB-HORST network
10.201.192.0/18 | ISP3
10.201.200.0/27 | ISP3 core network
10.201.255.0/24 | ISP3 RB-HORST network

Table 12 Hosts and the services running on them for EFS experiments.

Host | Services | Comment
isp1-rtr1 | Uplink to Internet, Whois Proxy |
isp1-rtr2 | ISP router, interconnection to ISP2 and ISP3 |
isp2-rtr1 | ISP router, interconnection to ISP1 |
isp2-un1  | Hardware uNaDa located at ISP2 |
isp3-rtr1 | ISP router, interconnection to ISP1 |
isp3-un1  | Hardware uNaDa located at ISP3 | Hosts the HORST and RBH_Secured SSIDs.
isp3-un2  | Headless, software uNaDa located at ISP3 | Hosts the HORST-DEMO and RBH_Secured_DEMO SSIDs.

3.2.1 Evaluation of caching functionality in RB-HORST
The goal of this experiment is to test the basic caching functionality of RB-HORST and to evaluate the cache performance. In order to make sure that the implemented prototype is capable of caching, the proxy functionality to intercept video requests and the capability to store content in, and serve content from, the cache of the home router have to be verified. The performance evaluation of the RB-HORST cache will quantify the performance depending on the content request rate, the content request strategy, the cache size, and the number of end devices. The results will be analyzed in terms of bandwidth utilization (saved traffic) and energy consumption, and mapped to subjective QoE.


Deployment infrastructure
This experiment focuses on access domain AS1 (ISP3), aiming to evaluate the caching capabilities of a single uNaDa. For this purpose, several Android smartphones with the RB-HORST Android application and a web browser installed, and the uNaDa hosting the RB-HORST software, are required. Figure 11 presents the test-bed segment that is required to run the experiment.

Figure 11 Test-bed segment for evaluation of caching functionality.
Parameters, Measurements and Metrics
Parameters:
A reference set-up will be used to assess the impact of each parameter. This means, one
parameter is varied per test and the reference values are used for the other parameters.

Cache size: 0MB (no caching), 128MB, 256MB, 512MB, 1GB (reference will be
selected after cache size performance study)

Number of end devices: 1 (reference), 2, 4, 8 devices

Video request rate: 1/16min, 1/8min (reference), 1/4min, 1/2min, 1/1min, 1/0.5min

Request generator: same video, random video (100 videos, uniform distribution),
catalogue (reference, 100 videos, Zipf-distributed probability), avg. video length:
3min
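The request generator strategies above can be sketched as follows; a minimal example of the Zipf-distributed catalogue generator (the Zipf exponent and the random seed are assumptions for illustration, not values fixed by the set-up):

```python
import random

def zipf_weights(n_videos=100, s=1.0):
    """Zipf-distributed popularity: weight of rank k is 1/k^s, normalized."""
    raw = [1.0 / (k ** s) for k in range(1, n_videos + 1)]
    total = sum(raw)
    return [w / total for w in raw]

def generate_requests(n_requests, n_videos=100, s=1.0, seed=42):
    """Draw video IDs (1 = most popular) according to Zipf popularity."""
    rng = random.Random(seed)
    weights = zipf_weights(n_videos, s)
    ids = list(range(1, n_videos + 1))
    return rng.choices(ids, weights=weights, k=n_requests)

requests = generate_requests(1000)
# The most popular video dominates the request stream,
# which is what makes caching effective for the catalogue strategy.
print(requests.count(1) > requests.count(100))
```

The "same video" and "random video" strategies are the two degenerate cases: a single-element catalogue and a uniform weight vector, respectively.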


Measurements (to be conducted on end user device and/or uNaDa):

Content request time (request sent on end user device, request arrived at uNaDa)

uNaDa action upon request (cache hit/miss)

Content serve time (content sent from uNaDa, content arrived at end user device)

Up-/downlink traffic traces measured at uNaDa (end user device - uNaDa, uNaDa - Vimeo)

Energy consumption (end user device, uNaDa)

Metrics (computed from measured data):

Cache hit rate of uNaDa = #(requests served from cache) / #(requests arrived)

Requests served by uNaDa = #(content serve time) [compare to video request rate]

Bandwidth utilization from traffic traces (download bandwidth at end user device
[uNaDa QoS], amount of traffic to Vimeo [inter-domain traffic saved])

Energy consumption

QoE (compute stalling events from download bandwidth and video bitrate)
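The metrics above can be computed from the measured counters and traces; a minimal sketch, assuming a simple fluid model in which a stalling event is possible whenever the download bandwidth falls below the video bitrate (function names are illustrative):

```python
def cache_hit_rate(served_from_cache, requests_arrived):
    """Cache hit rate of uNaDa = #(requests served from cache) / #(requests arrived)."""
    return served_from_cache / requests_arrived if requests_arrived else 0.0

def estimate_stalling(bandwidth_trace_kbps, video_bitrate_kbps):
    """Count trace samples in which the download bandwidth is below the video
    bitrate; in a simple fluid model these correspond to potential stalling."""
    return sum(1 for bw in bandwidth_trace_kbps if bw < video_bitrate_kbps)

print(cache_hit_rate(30, 40))                           # 0.75
print(estimate_stalling([2000, 900, 1500, 800], 1000))  # 2
```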

Test procedures
The following tests are defined to validate functionality and estimate performance.
Functionality tests
The functionality tests ensure that the caching functionality works as expected. Therefore,
home routers and end devices must be set up and the home router has to be registered in
the overlay. It must be tested that content requests are sent from the end device and
content can be consumed. The consumed content has to be cached on the home router
and a subsequent request to that content must be served from the cache of the home
router.
Figure 12 shows the topology of the caching functionality tests. A uNaDa is located in the
AS of an access provider. The AS is connected to the Internet via AS2 which might be a
transit provider. In the Internet, video content can be accessed from its providers.
The test is set up as follows: Google Nexus 5 smartphones have RB-HORST installed. uNaDa A has an Internet connection and provides the SSID HORST A_AP. Users A and C watch common videos and are connected via WiFi to A_AP.
In the functionality test reference scenario, both users A and C download the same video as usual, without RB-HORST functionality.
With RB-HORST in use on the uNaDa, a video downloaded by user A is cached on the uNaDa. User C then downloads the same video; the request is intercepted by the uNaDa and served from the uNaDa cache.
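The intercept-and-serve behaviour tested here can be sketched as follows; a minimal in-memory cache with hypothetical names, not the actual uNaDa proxy implementation:

```python
class UNaDaCache:
    """Minimal sketch of the uNaDa cache: the first request for a video is a
    miss and is fetched from the origin; later requests are served locally."""

    def __init__(self, fetch_from_origin):
        self._fetch = fetch_from_origin   # callable: video_id -> content
        self._store = {}
        self.hits = 0
        self.misses = 0

    def request(self, video_id):
        if video_id in self._store:       # e.g. user C's request: cache hit
            self.hits += 1
            return self._store[video_id]
        self.misses += 1                  # e.g. user A's request: fetch and cache
        content = self._fetch(video_id)
        self._store[video_id] = content
        return content

cache = UNaDaCache(lambda vid: f"video-bytes-{vid}")
cache.request("v1")   # user A downloads: miss, fetched from the origin
cache.request("v1")   # user C downloads the same video: hit, served locally
print(cache.hits, cache.misses)  # 1 1
```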


Figure 12 Topology of caching functionality tests.
Performance tests
The performance tests quantify the impact of different parameters on the caching
functionality of RB-HORST. The reference parameters will be used apart from the
respective investigated parameter:

Performance study cache size: reference set-up but change cache size (will provide
a reference cache size for further performance tests)

Performance study number of end user devices: reference set-up but change
number of end devices

Performance study inter-arrival time of requests: reference set-up but change video
request rate

Performance study request strategies: reference set-up but change request strategy

3.2.2 Large-scale RB-HORST++ Study
The goal of the large-scale RB-HORST++ study is to show that the social-aware prefetching and WiFi offloading mechanisms are functional and that they improve the perceived network service for the end-user compared to conventional network access and simple caching approaches. In contrast to caching, where content is kept in local network storage after it has been downloaded once by a user, the social-aware prefetching mechanism proactively downloads content that a local user is likely to watch in the future. Furthermore, the operation of RB-HORST++ in a realistic usage environment with a large number of participants is demonstrated and evaluated.


The study will be conducted with at least 40 participants at TUD, UZH, UniWue, and AUEB. Additional nodes may be emulated in EmanicsLab (LXC containers) if needed. The goal of the study is to have participants in at least 4 different locations throughout Europe.
For the basic prefetching functionality, the messaging overlay, the content prediction, and the cache management have to be operable. Furthermore, the mobile offloading features of RB-HORST require a large number of participants with realistic social connections. The performance evaluation of prefetching has to consider home routers distributed over multiple ASes and a social network between the users. The impact of the content request rate, of the content request strategy, and of the home router locations will be investigated. The results will be analyzed in terms of prefetching efficiency, bandwidth utilization (saved inter-domain traffic), energy consumption, and subjective QoE. The usage patterns required by the mobile offloading features, including the interaction of trusted and untrusted users, are investigated by project members and associated university students.
Deployment infrastructure
Access points are used as home routers running RB-HORST. Each of them includes WiFi
as well as a wired uplink port. The uplink port is used by the participants to connect the
devices to their home Internet access. Furthermore, the participants use their smartphones
as mobile devices.
Equipment set:

Home routers with WiFi access running RB-HORST

Internet access located in different ASes in at least 4 different European countries

More than 40 participants with their own mobile devices

ASes are connected to the public Internet

Parameters home router:

Hard disk size of home router: 16GB

Up- / downlink bandwidth of home router: Varying according to the connectivity
provided by the participants

CPU, RAM: 4-core CPU, 1GB RAM

Parameters end-device:

Android smart phone with the RB-HORST app installed

Parameters, Measurements and Metrics
The detailed metrics of a measured characteristic are given in brackets behind the name of the characteristic. The trigger for the measurement is given in square brackets. The trigger can be the occurrence of an event or a periodic timer. The list is grouped by measurement points in the experiment setup.
Measurements on the home router:
The data is logged in text files on the home router. These files are uploaded to the data
collection server using HTTP and REST every hour.


Connectivity to monitoring server (bidirectional throughput, traceroute, RTT)
[once after system setup]

Home router device status (CPU usage, network interface counters, available
space on local storage) [periodically 1/s]

Data offload (date, time, duration, transferred volume) [Event]

Details on the RB-HORST operations on the device

Dump friend list of home router owner [periodically 1/day]

Video Request Events (time, video ID, title, length, size, source,
download time) [Event]

Video Prefetching Events (time, video ID, title, length, size, source,
download time) [Event]

Video Serving Events (time, video ID, title, length, size, source, download
time) [Event]

Predictions (raw data, predicted ranking) [Event]

Overlay neighbors (IP address, user) in fixed time intervals, e.g. every hour [Event]

Cache hits (time, content, time in cache) [Event]

Cache delete events (time, content) [Event]

Raw traceroutes to other RB-HORST instances (IP addresses of hops)
[Event]

Measurements on the Mobile App:
The data is logged into a database and uploaded to the data collection server using HTTP
and REST every hour.

Connection status to private RB-HORST WiFi (duration) [periodically 1/s]

WiFi on own home router [on user request, Event: connection change from
Cellular]

Connection to home server (bidirectional throughput, round-trip time,
signal strength)

Connection to monitoring server (bidirectional throughput, traceroute,
round-trip time)

Cellular [Event: connection change from WiFi]

Connected mobile network (cell Id, signal strength, wireless technology,
operator)



Connection to monitoring server (bidirectional throughput, bidirectional
traceroute, round-trip time)

Mobile device status [periodically 1/s]

CPU utilization

Network Interface counters

Screen status

Power status (battery level, voltage, current)

Metrics:
The following metrics are calculated based on the data collected during the experiments. The calculations are conducted after the experiment on the data collection server.

Energy consumption of home router

Based on energy model for device and the measured device

Prefetching efficiency = #(prefetch time)/#(content serve time)

Cache hit rate = #(content serve time) / #(content request time)

Requests served = #(content serve time)

Bandwidth utilization from traffic traces

Inter-domain traffic produced by prefetching

Inter-domain traffic saved
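The count-based metrics above can be derived from the logged events; a sketch following the definitions given, where #(...) denotes the number of logged events (the event lists and names are illustrative):

```python
def metrics_from_events(prefetch_events, serve_events, request_events):
    """Compute the RB-HORST++ count-based metrics from the event logs,
    using the definitions stated in the deliverable."""
    serve = len(serve_events)
    return {
        "prefetching_efficiency": len(prefetch_events) / serve if serve else 0.0,
        "cache_hit_rate": serve / len(request_events) if request_events else 0.0,
        "requests_served": serve,
    }

m = metrics_from_events(prefetch_events=["p1", "p2"],
                        serve_events=["s1", "s2", "s3", "s4"],
                        request_events=["r%d" % i for i in range(8)])
print(m)  # {'prefetching_efficiency': 0.5, 'cache_hit_rate': 0.5, 'requests_served': 4}
```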

Test procedures
The test procedures are conducted by the participants under supervision. Each home
router device is handed out to a group of at least two participants.
The home router is configured by one of the participants at home and connected to their
home network. This network connection is used for Internet connectivity. This participant
configures the home router to represent her/his Facebook identity in the RB-HORST
system. All participants install the RB-HORST application on their Android smartphone and
set it up with their credentials.
Relying on the participants' existing Facebook relations, they are encouraged to interact, to use the RB-HORST system by visiting each other's locations, and to post RB-HORST-compatible content on their Facebook walls. The participants trigger measurements on the
home router and on their smartphone. Finally, each participant fills out a survey on their
experience and information on their home network.
3.2.3 Evaluation of data offloading functionality in RB-HORST
The goal of this experiment is to compare the potential bandwidth and energy savings of WiFi offloading under realistic bandwidth conditions as experienced by the end-user at home or while moving. For that purpose, the probability that users offload at the uNaDas of social contacts is measured, and the achievable savings in overall bandwidth and energy are compared to a transmission via 3G/4G.

Figure 13: Architecture of the Energy Monitor and Analyzer.
Deployment infrastructure
The infrastructure required for the experiment consists of uNaDas participating in the overlay and configured for the respective users. Furthermore, smartphones capable of connecting both to the cellular network and via WiFi to the local uNaDas are required. On the smartphone, the RB-HORST Facebook App needs to be installed. The test-bed setup is similar to the experiment described in Section 3.2.1. Additionally, a cellular connection is required to allow accessing the video content independently of RB-HORST; thus, a reference for the remaining tests is established. Similar to that experiment, content is retrieved via the uNaDa, once directly from the server, and the second time from the local cache. For these data transfers, the energy consumption is computed.
The energy monitor and analyzer are integrated into the SmartenIT architecture. The energy monitor estimates the power consumption of the uNaDa and transfers the measurements to the energy analyzer, which aggregates the samples from multiple uNaDas. The refined data is then used by the traffic manager to adapt the routing to the current energy consumption of the participating devices. The architecture of the energy monitors and analyzer is visualized in Figure 13.
The energy monitoring on the uNaDas is model based, meaning that the instantaneous
power consumption of the device is not measured directly, but derived using an energy
model. This model is generated by simultaneously measuring the power consumption and monitoring the system utilization of the underlying hardware. The power model is then calculated using regression approaches.
The power model derived from the regression analysis is then used to convert system
utilization samples on each uNaDa to power estimates, which are sent to the energy
analyzer for aggregation. These models result in a low error (<5%) when applied to the
same device type.
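The regression step can be sketched as a one-variable least-squares fit of power against CPU utilization; the real model also covers the network interface counters, and the calibration samples below are hypothetical:

```python
def fit_power_model(utilization, power_w):
    """Ordinary least squares for power = a * utilization + b,
    using the closed-form solution for a single predictor."""
    n = len(utilization)
    mean_u = sum(utilization) / n
    mean_p = sum(power_w) / n
    cov = sum((u - mean_u) * (p - mean_p) for u, p in zip(utilization, power_w))
    var = sum((u - mean_u) ** 2 for u in utilization)
    a = cov / var
    b = mean_p - a * mean_u
    return a, b

# Hypothetical calibration samples: (CPU utilization, measured power in watts)
cpu = [0.0, 0.25, 0.5, 0.75, 1.0]
watts = [2.0, 2.5, 3.0, 3.5, 4.0]
a, b = fit_power_model(cpu, watts)
estimate = a * 0.6 + b   # power estimate sent to the energy analyzer
print(round(a, 3), round(b, 3), round(estimate, 3))  # 2.0 2.0 3.2
```

On each uNaDa, only the cheap utilization sampling and this linear evaluation run online; the regression itself is a one-off calibration step per device type.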
This approach is also feasible for smartphones. Here, some additional effects have to be
considered, as the user interacts with the device and interfaces are only active for some
time. Still, on some devices (e.g. Nexus 5), it is possible to directly read the power
consumption using low level system calls or the power API.
Using the power estimates from the participating devices, collected on the energy analyzer, consequently allows deriving the cost of a data transmission between any two participating devices. To analyze the energy cost of the RB-HORST mechanism, the power consumption as recorded by the energy analyzer is correlated with the traffic caused by RB-HORST to determine the overall cost of the mechanism. Furthermore, it is also possible to derive the cost of arbitrary data transfers and scheduling decisions of the mechanism.
For a derivation of the overall social mobile offloading potential, interactive experiments
with users in the context of the large-scale user study as described in Section 3.2.2 are
foreseen. Before the experiments, a survey is handed out to users to fill in basic data that
cannot be logged easily automatically, i.e., the name of the person deploying the uNaDa,
the location of the deployment (the address) and the nominal bandwidth of the DSL
connection as well as the ISP. Moreover, a number of technical parameters will be logged
by the RB-HORST uNaDa (social log) as well as the Mobile App (offloading log).
Parameters, Measurements and Metrics
To assess the energy cost of caching a video, the system utilization (i.e. network traffic on
each interface in each direction, CPU utilization) on the uNaDas participating in the
content distribution needs to be monitored. Based on the power model of the uNaDa, the
power consumption for receiving this video is derived.
The cost of transferring a cached video to the mobile device consists of the energy
consumption of the mobile device and the uNaDa using the respective power models.
This cost can then be compared with the cost of streaming a video directly from the server.
For this, an energy model of the server must be assumed, while the power draw of the
uNaDa and the smartphone can be calculated using the calibrated power models.
The parameters of the experiment are:

Size of the video

Number of peers (0:10:50)

SPS support as provided by SEConD (yes/no)

The required measurements are:

uNaDa
o Power


o Or:

Ethernet traffic in

Ethernet traffic out

WiFi traffic in

WiFi traffic out

CPU utilization

Smartphone
o Power
o Or:

CPU utilization

WiFi traffic in

WiFi traffic out

Cellular traffic in

Cellular traffic out

Display brightness

Other components

The metrics to be calculated are:

Cost for caching a video (J/MB)

Cost for transferring the video from cache to smartphone (J/MB)

Cost for streaming a video on the smartphone (J/MB)

To calculate the full cost of the mechanism, the derived measurements are then to be
combined with the efficiency of the prefetching algorithm, including the cost of needlessly
fetched videos. This is done in a post-processing step on the evaluation server.
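The J/MB metrics above can be derived by integrating the power samples over the transfer; a sketch assuming the periodic 1/s sampling from the measurement list (sample values are hypothetical):

```python
def energy_cost_per_mb(power_samples_w, sample_interval_s, transferred_bytes):
    """Integrate the power samples over time (rectangle rule) to obtain the
    energy in joules, then normalize by the transferred volume in MB."""
    energy_j = sum(power_samples_w) * sample_interval_s
    mb = transferred_bytes / 1e6
    return energy_j / mb

# Hypothetical transfer: 10 one-second samples at 3 W while moving 15 MB
cost = energy_cost_per_mb([3.0] * 10, 1.0, 15_000_000)
print(cost)  # 2.0 J/MB
```

The same function applies to all three metrics (caching, cache-to-smartphone transfer, direct streaming); only the device whose power samples are fed in changes.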
For the derivation of the overall potential of social mobile offloading, two logs are
necessary: The social log (Table 13) is logged regularly once per day by RB-HORST on
the uNaDa. The purpose of this log is the dumping of the social connections of the owner
of the access point.
Table 13 The social log structure of RB-HORST
1. line: MD5(facebook_name_of_unada_owner), SSID used for social offloading
2. line: MD5(unada_owners_friend_one)
3. line: MD5(unada_owners_friend_two)
…
n+1. line: MD5(unada_owners_friend_n)


Second, the offloading log (Table 14) contains all offloading events and is logged by the App. The log is updated at a frequent sampling interval.

Table 14 The offloading log structure of RB-HORST
1. line: MD5(facebook_name_of_Smartphone_owner), SSID at which offloading took place, timestamp, offloaded volume
…
n+1. line: MD5(facebook_name_of_Smartphone_owner), SSID at which offloading took place, timestamp, offloaded volume

Both logs are pushed to a central measurement server regularly (once per hour). The MD5
hashes are necessary to maintain the privacy of users. The measured data is sufficient to
reconstruct the social graph and all offloading events.
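The pseudonymization can be reproduced with the standard library; a short sketch illustrating why equal names still match across log entries (so the social graph remains reconstructible) while the names themselves stay private:

```python
import hashlib

def md5_id(name: str) -> str:
    """Pseudonymize a Facebook name as in the social and offloading logs."""
    return hashlib.md5(name.encode("utf-8")).hexdigest()

# The same name always yields the same 32-character hash, so edges between
# log entries can be matched without revealing the underlying identity.
owner = md5_id("alice")
friend_entry = md5_id("alice")
print(owner == friend_entry, len(owner))  # True 32
```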
Test procedures
Non-Interactive experiment for energy consumption:
1. Connect mobile phone to cellular network
2. Stream video via the cellular interface and measure
3. Connect to RB-HORST AP
4. Stream video via RB-HORST AP and measure
5. Stream/load cached video via RB-HORST AP and measure
6. Compare consumed energy for each option (2, 4, 5)
Interactive experiment for social mobile offloading:
1. Access point users are encouraged to use the RB-HORST system actively
2. The offloading events and social relations are logged to the social log and the
offloading log, respectively
3. The logs are pushed to the logging server in regular intervals
4. The results of this experiment are obtained by post processing the acquired logs
Other relevant Information
The smartphone model used for the energy-related experiments should be the Nexus 5, as it simplifies the measurement procedure. It allows measuring the battery voltage and current draw, and hence the power consumption of the device is derived by multiplying both. The highest accuracy of the measurements can be achieved using the Raspberry Pi as uNaDa, as calibrated power models are available.
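The voltage-times-current derivation can be sketched as follows; the sysfs battery paths mentioned in the comment are typical Linux/Android locations and are an assumption, not part of the set-up:

```python
def power_from_battery(voltage_uv: int, current_ua: int) -> float:
    """Convert battery readings in microvolts and microamperes (as exposed
    e.g. under /sys/class/power_supply/battery/ on many Linux/Android
    devices) to instantaneous power in watts: P = U * I."""
    return (voltage_uv * 1e-6) * (current_ua * 1e-6)

# Hypothetical reading: 3.8 V at 450 mA draw
print(round(power_from_battery(3_800_000, 450_000), 3))  # 1.71
```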


4 Showcases
During the Year 2 Technical Review with the EC the project team demonstrated a running
pilot implementation and major functionalities. Showcases can be considered as
preliminary experiments aimed at showing the basic behaviour of SmartenIT mechanisms
in a test-bed environment.
The three following showcases have been designed to validate selected aspects of the
SmartenIT implementation and to present the benefits of particular mechanisms:

Multi-domain network traffic optimization in DTM – presentation of DTM’s
network traffic cost optimisation algorithm in a multi-domain network environment.

Locality, social awareness and WiFi offloading in RB-HORST – presentation of
major RB-HORST functionalities, i.e. WiFi offloading, overlay prediction and social
prediction.

Mobile Internet Access Offloading in EEF/RB-HORST – presentation of data
offloading functionality in RB-HORST with the Energy Efficiency Measurement
Framework (EEF).

4.1 Multi-domain network traffic optimization in DTM
In this section a showcase of the DTM pilot implementation is documented including
details required to set up and run the software in a test-bed network environment.
4.1.1 Scenario topology
The DTM showcase test-bed comprises four logically isolated network domains, as shown in Figure 14. Inter-domain routing is configured by means of the BGP protocol. Two GRE tunnels are configured between routers located in AS3 and AS1. Each of them enters AS1 on a different inter-domain link. In AS3, a machine emulating a data center acts as the source of inter-domain traffic. This traffic traverses one or two transit domains, AS2 and AS4, to reach the receiving data center located in AS1. SmartenIT software prototype v1.1 instances are deployed in AS3 (S-Box and SDN Controller) and AS1 (S-Box). Additional software traffic generators and receivers are deployed inside the network to emulate background traffic on links L1 and L2.
The DTM showcase logical test-bed topology is physically deployed on a set of virtual machines distributed over 3 PCs, as shown in Figure 15. In the physical deployment, the routers in AS2 and AS3 are represented by one virtual machine.
Table 15 lists the required extensions of the SmartenIT test-bed.
Table 15 Overview on the SmartenIT test-bed extensions used with DTM.
Scenario: 4 ISPs
Required test-bed extensions as defined in D4.1: 5.1.1 Traffic Generator, 5.2.2 VyOS Software Router, 5.2.3 OpenFlow


[Figure content omitted: four interconnected domains AS1-AS4 with BGP and intra-domain routers, inter-domain and intra-domain links, traffic generators (senders) and receivers, Cloud A (DC-A) and Cloud B (DC-B) with a DC traffic generator, S-Boxes in AS1 and AS3, and an SDN controller.]
Figure 14 DTM showcase test-bed logical topology
Table 16 and Table 17 present the used IP address ranges and the services deployed on particular VMs.

Table 16 Detailed IP Address table for the production network for the DTM evaluation.
IP Address or Address Range: Usage
10.0.0.0/8: Interconnection
10.0.1.0/24: Interconnection ISP1-ISP2
10.0.2.0/24: Interconnection ISP1-ISP4
10.1.6.0/24: Interconnection ISP2-ISP4
10.1.1.0/30: Interconnection ISP2-ISP3
10.10.1.0/24: ISP1
10.10.2.0/24: ISP1
10.10.3.0/24: ISP1
10.1.5.0/24: ISP2
10.1.3.0/24: ISP3
10.1.4.0/24: ISP3
10.1.2.0/24: ISP4


[Figure content omitted: physical deployment on three PCs. Each PC hosts KVM virtual machines (Vyatta software routers, S-Boxes, data center emulators, traffic generators and receivers) interconnected by Open vSwitch (OVS) instances, with separate management bridges and VLAN-tagged physical links (p1p1.201, p1p2.201) between the PCs; interface addresses follow Table 16.]
Figure 15 Mapping of DTM showcase logical topology to physical test-bed.


Table 17 Hosts and services running on them for DTM.
Host: Services (Comment)
dtm-isp1-rtr-bg1: ISP router, interconnection to AS2 (Vyatta software router)
dtm-isp1-rtr-bg2: ISP router, interconnection to AS4 (Vyatta software router)
dtm-isp1-rtr-da: ISP router allowing connection with DC and S-Box (Vyatta software router)
dtm-isp1-vmdc: Data Center receiving inter-domain traffic
dtm-isp1-vmsbox: S-Box
dtm-isp1-vmtr1: Receiver of background traffic passing inter-domain link AS2-AS1
dtm-isp1-vmtr2: Receiver of background traffic passing inter-domain link AS4-AS1
dtm-isp2-rtr-bg1: ISP router, interconnection to AS1, AS3 and AS4 (Vyatta software router)
dtm-isp2-vmtg1: Generator of background traffic passing inter-domain link AS2-AS1
dtm-isp3-rtr-bg: ISP router, interconnection to AS2 (Vyatta software router)
dtm-isp3-ofda: OVS connected to SDN controller
dtm-isp3-vmdc: DC generating inter-domain traffic
dtm-isp3-vmsbox-sdn: S-Box and SDN controller VM
dtm-isp4-rtr-bg2: ISP router, interconnection to AS1 and AS2 (Vyatta software router)
dtm-isp4-vmtg2: Generator of background traffic passing inter-domain link AS4-AS1

4.1.2 Scenario assumptions
In order to deploy DTM and enable it to perform efficient inter-domain traffic optimisation, a set of assumptions has to be met.
Network traffic managed by DTM is caused by data being exchanged between cloud resources located in more than one data center running in distant network domains. The domain receiving the data traffic needs to have two inter-domain links between which inbound traffic can be dynamically distributed. For the showcase purposes, traffic originating from a data center is emulated using a custom software application which allows the generation of traffic patterns similar to the ones observed between data centers (i.e., with a large number of parallel flows). Additional traffic generators deployed within the test-bed generate the background, non-manageable traffic on the inter-domain links.
Before the showcase is run, the test-bed is completely configured and operational. Traffic
generator instances (both senders and receivers) are running and traffic is being sent
within the test-bed. S-Box instances and the SDN Controller are correctly configured and
started beforehand.


4.1.3 Reference scenario
The reference scenario assumes a bulk data transfer between the data centers placed in AS1 and AS3 (Figure 14) without DTM deployed in the network. In such a case, only one of AS1's inter-domain links is always selected by BGP to transfer data between the DCs. The cost of traffic received on a particular link depends on the total volume of the traffic received during a billing period. Specific traffic cost functions are assumed on both of the links. Since many parameters of BGP may affect the link selection, the calculation of cost for both cases (selection of either link) is considered.
Knowing the background and manageable traffic volumes (measured during the scenario with DTM), the cost incurred by AS1 without DTM enabled can be easily calculated.
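The reference cost calculation can be sketched as follows; the linear per-GB cost functions are hypothetical placeholders for the specific traffic cost functions assumed on the links:

```python
def billing_cost(cost_fn_l1, cost_fn_l2, background_l1_gb, background_l2_gb,
                 managed_gb, managed_via_link1):
    """Cost for AS1 over one billing period when all manageable (DC-to-DC)
    traffic enters via a single link, as selected by BGP without DTM."""
    l1 = background_l1_gb + (managed_gb if managed_via_link1 else 0)
    l2 = background_l2_gb + (0 if managed_via_link1 else managed_gb)
    return cost_fn_l1(l1) + cost_fn_l2(l2)

# Hypothetical per-GB tariffs for link 1 and link 2
c1 = lambda gb: 2.0 * gb
c2 = lambda gb: 3.0 * gb
print(billing_cost(c1, c2, 10, 10, 20, managed_via_link1=True))   # 90.0
print(billing_cost(c1, c2, 10, 10, 20, managed_via_link1=False))  # 110.0
```

Evaluating both branches reflects the point made above: since BGP may pick either link, both reference costs are computed and compared against the DTM-enabled run.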
4.1.4 Showcase scenario
The goal of the showcase is to present the benefit of running DTM inside the network, which allows for cost reduction resulting from an optimal inbound traffic distribution among the two inter-domain links.
For the real-time showcase purposes, the billing period was set to 30 minutes (instead of the typical 1 month). During consecutive accounting periods it can be observed how the traffic is distributed over the links and how this distribution corresponds to the optimal distribution represented by the DTM reference vector values. The estimated benefits of using DTM are presented during the entire accounting period, and the real benefit gained in the current accounting period is known at the end of the period.
During the showcase a custom visualization application was used. The application view screenshot presented in Figure 16 consists of 5 real-time plots and one static plot. Starting from the upper row, the first two diagrams from the left present the total and background traffic volume passing a particular inter-domain link: link 1 (AS1-AS2) and link 2 (AS1-AS4), respectively. The last chart presents the real-time estimation of traffic cost in three cases: when DTM is used, and when DTM is disabled and i) DC-to-DC traffic passes entirely through link 1, or ii) DC-to-DC traffic passes entirely through link 2 (depending on the default route selected by BGP). As shown in Figure 17, the gap between the cost lines with and without DTM at the end of the billing period represents the achieved benefit (cost savings).
In the bottom row of Figure 16, the diagram on the left presents the real-time distribution of traffic between the inter-domain links (in all 3 previously mentioned cases) and the calculated reference vector. The middle graph is a static representation of the left diagram captured at the end of the previous accounting period. The right chart presents in real time the current value of the compensation vector for link 1 (the compensation vector for the second link has the same value but an opposite sign). A positive value of the compensation vector means that at this particular moment new flows generated by the DC should be sent via tunnel 1 (traversing link 1); a negative value means that tunnel 2 (traversing link 2) should be selected.
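The sign rule for the compensation vector can be sketched as follows (a simplification of the actual DTM compensation procedure):

```python
def select_tunnel(compensation_link1: float) -> int:
    """A positive compensation value for link 1 means new DC flows should use
    tunnel 1 (traversing link 1); a negative value means tunnel 2 (link 2).
    The vector for link 2 is just the negation of this value."""
    return 1 if compensation_link1 > 0 else 2

print(select_tunnel(0.35))   # 1
print(select_tunnel(-0.12))  # 2
```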


Figure 16 View of the visualization application for DTM showcase.

Figure 17 Graph from the visualization application presenting real-time cost estimation.
During the showcase presentation a burst of background traffic on link 2 was generated (highlighted in Figure 18 with a red circle). As could be observed, such a sudden disruption in the traffic distribution caused any new flow from the DC (manageable traffic) to be redirected to the tunnel passing link 1 (note the high difference between total and background traffic on link 1 in the top graph of Figure 18). In the bottom graph of Figure 18 a deviation from the reference vector can be easily observed; however, the execution of the DTM compensation procedure ensured that the reference vector was met shortly afterwards.


Figure 18 Selected graphs from the visualization application with highlight on traffic burst
compensation.

The DTM showcase clearly presents the benefits of implementing DTM inside the network in terms of inter-domain traffic cost reduction.


4.2 Locality, social awareness and WiFi offloading in RB-HORST
This section documents a showcase of the RB-HORST pilot implementation, including the details required to set up and run the software in a test-bed network environment.
4.2.1 Scenario topology
Figure 19 shows the topology of the RB-HORST showcase, which is the same as the basic topology and its mapping to the actual test-bed described for the EFS experiments (Figure 9 in Section 3.2.1). It consists of three NSP domains, namely two access NSPs (AS1 and AS3) and one transit NSP (AS2), and three users, Andri, Sergios and George, each having Internet access through their respective ISP. This resembles a real-world scenario in which Andri is located in Zurich and has a contract with a local ISP, e.g. Swisscom, while Sergios and George are located in different suburbs of Athens but have a contract with the same ISP, e.g. Wind Telecom. These two ISPs are connected through a transit NSP and also provide access to the rest of the Internet, e.g. to Facebook and Vimeo servers.

Figure 19: Topology of the prefetching and social awareness showcase.
In each user’s premises there is a uNaDa, i.e. a Raspberry Pi hosting the RB-HORST software. Each uNaDa is bound to its owner via his Facebook credentials and provides two SSIDs: an open one with no Internet access, and a private one with full Internet access. In addition, each user owns a smartphone to access the Internet.
The RB-HORST showcase logical test-bed topology is physically deployed on a set of virtual machines distributed over three PCs, as shown in Figure 10 (the test-bed topology and network configurations are presented in Section 3.2.1, as they are the same for the RB-HORST showcase and the RB-HORST experiment). Table 18 lists the required extensions
of the SmartenIT test-bed; Table 11 and Table 12 present the IP address ranges used and the services deployed on each VM.
Table 18: Overview of the SmartenIT test-bed extensions used with RB-HORST.

| Scenario | Required test-bed extensions as defined in D4.1 |
|----------|--------------------------------------------------|
| 3 ISPs   | 5.3.1 Raspberry Pi                               |

4.2.2 Scenario assumptions
The RB-HORST showcase execution relies on the following assumptions and required configuration:

- George and Andri each own a Google Nexus 5 smartphone, with at least Android OS v4.4.4 and the HORST, Facebook and Firefox applications installed.
- Andri’s and George’s uNaDas are Raspberry Pis running the RB-HORST software and offering access-point capabilities, while Sergios’ uNaDa is a virtual machine hosting only the RB-HORST software.
- HORST* SSIDs are open, with no Internet access, and are used for the communication between the HORST Android application and the uNaDa.
- RBH_Secured* SSIDs require authentication and provide full Internet access.
- Each user is the owner of his uNaDa, meaning that he has logged in to the RB-HORST service with his Facebook credentials.
- Andri and George are Facebook friends and watch similar videos. This means that their uNaDas’ caches share some common videos and the uNaDas are considered overlay neighbors. Thus, in the next iteration of overlay prediction, their newly cached contents are likely to be prefetched to each other’s uNaDa.
- Sergios is not a Facebook friend of the other two users, but belongs to the same domain as George. He has watched some content in the past, which is cached locally, and his uNaDa participates in the uNaDas’ overlay network.

4.2.3 Reference scenario
In the reference scenario (Figure 20), George is connected to Andri’s private SSID and browses the Internet. Eventually, he watches a Vimeo video (“Italy - A 1 Minute Journey”) which is not present in Andri’s uNaDa cache. The video is fetched from the Vimeo servers and buffers slowly during playback, resulting in low QoE.
However, the video is inserted into Andri’s uNaDa cache and can be served from there for future requests.
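The cache-on-miss behavior described above can be sketched as follows (a minimal illustration; the function names and the string payload are assumptions, and the real uNaDa proxies HTTP video requests rather than Python calls):

```python
def serve_video(video_id, cache, fetch_origin):
    """Transparent proxy sketch: serve a video from the uNaDa cache on a
    hit; on a miss, fetch it from the origin (e.g. Vimeo) and cache it."""
    if video_id in cache:
        return cache[video_id]        # local delivery: fast buffering, high QoE
    data = fetch_origin(video_id)     # remote fetch: slow buffering, low QoE
    cache[video_id] = data            # insert so future requests are cache hits
    return data

cache = {}
serve_video("italy-1min", cache, lambda vid: "<video bytes>")
print("italy-1min" in cache)  # True: cached for future requests
```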


Figure 20: Reference scenario.
4.2.4 Showcase scenario
WiFi offloading
In the WiFi offloading showcase presented in Figure 21, George is visiting Zurich and is located close to Andri’s uNaDa. Currently, he cannot access the Internet, because he has not enabled 3G roaming on his device.
To gain Internet access, George opens the HORST Android application, which transparently:

- connects to the HORST SSID of Andri’s uNaDa,
- provides George’s Facebook ID, and
- finally receives the private SSID (RBH_secured) credentials and connects to it.

In the meantime, Andri’s uNaDa has verified that George is one of Andri’s friends and can therefore be considered a trusted user, allowed to connect to the private SSID and browse the Internet.
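The transparent connection and verification steps can be sketched as the following simplified handshake (the SSID names come from the scenario assumptions; the function name and friend-check logic are simplified stand-ins for the real uNaDa-side verification):

```python
def offload_to_unada(visitor_facebook_id: str, owner_friends: set) -> str:
    """Return the SSID the visitor ends up connected to (illustrative only)."""
    ssid = "HORST"  # step 1: join the open SSID (no Internet access)
    # step 2: the HORST app presents the visitor's Facebook ID;
    # step 3: the uNaDa checks it against the owner's friend list.
    if visitor_facebook_id in owner_friends:
        ssid = "RBH_secured"  # credentials received, reconnect to private SSID
    return ssid

# George is one of Andri's Facebook friends, so he gains full access.
print(offload_to_unada("george", {"george", "maria"}))   # RBH_secured
# A stranger stays on the open SSID without Internet access.
print(offload_to_unada("sergios", {"george", "maria"}))  # HORST
```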
The outcomes of the showcase are presented in:

- the HORST Android application, which shows the progress of the authentication process and, finally, the connection to the private SSID, and
- the uNaDa web interface, which shows the authentication attempt by a user other than Andri and, finally, its successful outcome.


Figure 21: WiFi offloading showcase.

Overlay prediction
The overlay prediction showcase assumes that the reference scenario has been completed and that newly cached content (“Italy - A 1 Minute Journey”) has been stored in Andri’s uNaDa cache. Now George returns home and connects to his home gateway; all of the following prediction showcases occur at his uNaDa.
As stated in the showcase assumptions, Andri’s and George’s uNaDas are overlay neighbors, which means that they already share some common videos, and their overlay prediction algorithms will predict that newly watched/cached videos should be prefetched. Hence, the overlay prediction in George’s uNaDa predicts that the Vimeo video “Italy - A 1 Minute Journey” is likely to be watched again. Although the video exists in Andri’s uNaDa, it is preferably fetched from the Vimeo CDN servers, because they are closer than Andri’s uNaDa.
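The source-selection logic described above can be illustrated as follows (a sketch under assumed inputs: candidate sources annotated with an estimated distance such as RTT; the actual RB-HORST selection criteria may differ):

```python
def pick_prefetch_source(candidates: dict) -> str:
    """Pick the closest source holding a predicted video.

    `candidates` maps a source name to its estimated distance (e.g. RTT
    in ms); the nearest source is preferred, which is why a nearby Vimeo
    CDN edge can win over a remote uNaDa that also holds the content."""
    return min(candidates, key=candidates.get)

sources = {"vimeo-cdn": 15.0, "andri-unada": 120.0}
print(pick_prefetch_source(sources))  # vimeo-cdn
```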
When George watches the video again, his video request is proxied by the uNaDa and the video is served from the uNaDa’s content server, resulting in rapid video buffering and higher QoE than in the reference scenario.


Figure 22: Overlay prediction showcase.
The results of the showcase are presented in:

- George’s Firefox web browser, in which the video buffers instantly during playback, and
- George’s uNaDa, which shows that the content is prefetched from the Vimeo servers instead of Andri’s uNaDa, and that George’s request is intercepted and served by the local uNaDa content server.

Social Prediction
Andri watches another video, “Brussels in 1 minute”, from his home and posts it to his Facebook wall.
Social prediction in George’s uNaDa predicts that this video is likely to be watched by George and should be prefetched. The video has also been watched by Sergios and is cached on Sergios’ uNaDa, which belongs to the same domain as George. Hence, the video is fetched from Sergios’ uNaDa instead of the Vimeo servers, resulting in inter-domain traffic savings.
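The domain-local preference described above can be sketched as follows (an illustration under assumptions: the domain labels and the fallback origin name are hypothetical, and the real source selection may weigh additional criteria):

```python
def pick_social_source(cached_at: dict, own_domain: str,
                       origin: str = "vimeo-origin") -> str:
    """Prefer a uNaDa cache in the requester's own NSP domain, so the
    prefetch avoids inter-domain traffic; fall back to the origin CDN
    if no same-domain cache holds the video."""
    for unada, domain in cached_at.items():
        if domain == own_domain:
            return unada
    return origin

# Sergios' uNaDa sits in George's domain, so it is preferred over Vimeo.
holders = {"andri-unada": "AS1", "sergios-unada": "AS3"}
print(pick_social_source(holders, own_domain="AS3"))  # sergios-unada
```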
When George checks his Facebook News Feed, Andri’s Facebook post appears and George tries to watch the posted Vimeo video. His uNaDa proxies the request and serves the video from the local content server, resulting in better QoE than in the reference scenario.


Figure 23: Social prediction showcase.
The results of the social prediction showcase are presented in:

- George’s smartphone, on which the video buffers instantly, and
- George’s uNaDa web interface, which shows the outcome of the social prediction, the content source selection (Sergios’ uNaDa instead of the Vimeo servers), and the content delivery by the local uNaDa content server.

4.3 Mobile Internet Access Offloading in EEF/RB-HORST
This showcase relates to the evaluation of the data offloading functionality in RB-HORST (and RB-HORST++, an extension integrating MoNA, vINCENT, and SDN-DC) with the Energy Efficiency Measurement Framework (EEF).
Motivation and Integration into SmartenIT
Today's overlay-based mobile cloud applications pose a challenge to operators and cloud providers in terms of increasing traffic demands and energy costs. Energy efficiency plays a major role as a key performance metric in both the OFS and the EFS scenario. Therefore, the SmartenIT consortium has defined energy efficiency as one of the key targets for the design and optimization of traffic management mechanisms for overlay networks and cloud-based applications.
This target serves as a design goal for emerging proposals, e.g., content distribution systems that minimize energy consumption by exploiting the energy awareness of their architecture elements and by organizing caches efficiently so as to minimize the volume of transferred data. For example, the end-user focused scenario has the goal of energy

efficiency for end users by distributing and moving content and services in an energy-aware way, while taking providers' interests into account. Finding the right content placement is an optimization problem to be solved in an energy-efficient manner. In order to optimize the placement with respect to energy efficiency, information on energy consumption is needed. This data can be derived from energy models providing an energy estimate of the placement. Energy models which estimate energy consumption from network measurements exist in the literature. Moreover, modelling the energy efficiency of placements also allows the prediction of future energy consumption. The Operator Focused Scenario (OFS) likewise has the goal of achieving the highest operating efficiency in terms of low energy consumption, besides other optimization criteria. For example, cloud federation enables collaboration so as to achieve both individual and overall improvements of cost and energy consumption. Moreover, data migration/relocation may often be imposed by the need to reduce the overall energy consumption within the federation by consolidating processes and jobs onto few DCs only. Therefore, the OFS scenario defines a series of interesting problems to be addressed by SmartenIT, specifically energy efficiency for DCOs, either individually or overall for all members of the federation. To this end, the EEF demo showcases the SmartenIT energy framework. It consists of the Energy Analyzer, which provides energy consumption estimates, thereby enabling an energy-efficient network management approach.
A precondition for optimizing energy consumption is measuring it. Thus, a measurement platform for energy consumption based on validated energy models is integrated into the RB-HORST showcase. The measurement of energy consumption without the need for additional measurement hardware is demonstrated, and the measurements are visualized.
4.3.1 Scenario topology
The network configuration for the EEF demo is based on the multi-ISP scenario. Three ISPs are used with a total of three uNaDas: ISP2 hosts one uNaDa, and ISP3 hosts two uNaDas, one running on a Raspberry Pi and one running in a VM.

Figure 24: Topology and Scenario Assumptions.


The network topology and test-bed mapping are identical to those used for the RB-HORST showcase, as described in Section 3.2.1.
4.3.2 Scenario assumptions
The experiment setup and scenario assumptions are depicted in Figure 24. In particular, the following assumptions are made:

- There are two Google Nexus 5 smartphones:
  - RB-HORST and EnergyMonitor are installed, and
  - each smartphone is equipped with a tunnel using the 3G interface to the test-bed.
- A uNaDa with an Internet connection provides open WiFi access:
  - RB-HORST and EnergyMonitor are installed.
- The cache of RB-HORST is emptied before the demo.

Note: RB-HORST is considered to be one possible application under test for this demo. It is not argued that RB-HORST interfaces with EEF; however, both RB-HORST and DTM can use the EEF framework to optimize for energy efficiency.
The energy monitor constantly measures and delivers system parameters to the Energy Analyzer, which converts the measured data into energy estimates using validated models measured by TUD for the SmartenIT project. The energy estimates are visualized together with the topology shown above.
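As a rough illustration of how such a model converts measured system parameters into energy estimates, consider a linear model (the coefficients below are placeholders for illustration only, not the validated values measured by TUD):

```python
def estimate_energy_joules(bytes_tx: float, bytes_rx: float, duration_s: float,
                           idle_w: float = 1.0,
                           j_per_mb_tx: float = 0.5,
                           j_per_mb_rx: float = 0.4) -> float:
    """Linear energy model: idle baseline power over the observation
    window plus per-megabyte costs for transmitted and received traffic.
    All coefficients are illustrative placeholders."""
    mb = 1_000_000.0
    return (idle_w * duration_s
            + j_per_mb_tx * bytes_tx / mb
            + j_per_mb_rx * bytes_rx / mb)

# 10 s of activity, 1 MB uplink, 5 MB downlink:
# 1.0*10 + 0.5*1 + 0.4*5 = 12.5 J under the placeholder coefficients.
print(estimate_energy_joules(1_000_000, 5_000_000, 10.0))  # 12.5
```

Fitting such per-interface coefficients once (e.g. per WiFi or 3G interface) is what allows energy to be estimated from network measurements alone, without additional measurement hardware.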
4.3.3 Reference scenario
Mobile Internet access offloading in EEF/RB-HORST is compared to the case of regular downloads using a 3G or 4G cellular connection. The metrics to be compared are the download speed and the power consumption of the mobile device, combined with an estimate of the back-end power consumption.
The reference scenario is established as follows: the user downloads a video using the 3G connection, and the video is delivered via 3G. At the same time, the energy consumption of the smartphone is monitored and visualized. This is represented by the red connection marked in Figure 25, which does not make use of the uNaDa.
4.3.4 Showcase scenario
The showcase scenario is depicted in Figure 25.
Step 1 (using WiFi offloading instead of 3G):
The user downloads the same video using the RB-HORST WiFi. While the video is being viewed, it is cached in the local RB-HORST cache. At the same time, the energy consumption of the smartphone and the uNaDa is monitored and visualized side-by-side. The smartphone shows better energy efficiency than in the reference case, while the uNaDa shows higher energy consumption, as traffic is generated on its WiFi interface.


Figure 25: Mobile Internet access offloading in EEF/RB-HORST showcase.

Step 2 (using WiFi offloading and RB-HORST caching):
The user downloads the same video using the RB-HORST WiFi, and the video is delivered from the RB-HORST cache. At the same time, the energy consumption of the smartphone and the uNaDa is monitored and visualized side-by-side. A result similar to step 1 is expected, with the RB-HORST access point showing better energy efficiency.


5 Summary
The core of this document is a set of experiment definitions which will be used to drive the prototype validation activities. As the project covers two complementary domains, one in which the end user plays the key role and a second dominated by an ISP and a cloud service provider or data center operator, the experiments reflect the respective characteristics.
The OFS (Operator Focused Scenario) experiments have been designed to validate the prototype implementation of DTM. Two main experiment scenarios have been proposed to demonstrate the optimization benefit of the DTM mechanism:

- evaluation of multi-domain traffic cost reduction in DTM with data transfer between two locations (distant Data Center/cloud resources), and
- evaluation of multi-domain traffic cost reduction in DTM with data transfer between multiple locations (DCs/clouds serving as both traffic sources and receivers).

The EFS (End-user Focused Scenario) experiments focus on the implementation of the RB-HORST mechanism. They mainly validate optimization techniques for caching and WiFi offloading with the use of social network information and energy efficiency measurements. The following three EFS experiments have been designed:

- evaluation of the caching functionality in RB-HORST,
- a large-scale RB-HORST study, and
- evaluation of the data offloading functionality in RB-HORST.

The second EFS experiment is especially interesting and promising because it will be executed in cooperation with students who will be using the RB-HORST implementation.
Moreover, deliverable D4.2 includes the descriptions of the showcases organised during the second-year technical review with the EC. The SmartenIT project presented the functions of the pilot implementation in a test-bed prepared for the event. The work on the showcases was a direct input to the further efforts on experiment definitions.
Each experiment definition and showcase contains a sufficient set of details for the project team responsible for validation to set up and execute the software. In the next stage, the results of the experiments will be analysed to assess the SmartenIT solutions.


6 SMART Objectives
Through this document, four SmartenIT SMART objectives defined in Section B1.1.2.4 of the SmartenIT Description of Work (DoW) have been partially addressed: one overall objective (O4, see Table 19) and three practical objectives (O2.2, O3.1 and O3.4; see Table 20).
The overall Objective 4 is defined in the DoW as follows:
Objective 4: SmartenIT will evaluate use cases selected out of three real-world scenarios (1) inter-cloud communication, (2) global service mobility, and (3) exploiting social networks information (QoE and social awareness) by theoretical simulations and on the basis of the prototype engineered.
This deliverable provides the definitions of experiments for the OFS and EFS scenarios (Section 3). Both scenarios were proposed as an outcome of the analysis and integration process conducted in WP2. The OFS and EFS experiments cover the characteristics of all three scenarios listed in Objective 4. Initial evaluations of the use cases with the selected network traffic management solutions were already performed during the Year 2 review meeting as showcases (Section 4); these mainly demonstrated the available functionalities. The advanced experiment definitions described in D4.2 provide the details required to run advanced functional and performance tests of the implemented SmartenIT solutions.
Table 19: Overall SmartenIT SMART objective addressed.

| Objective No. | Specific (Measurable) | Deliverable Number (Measurable) | Achievable | Relevant | Milestone Number (Timely) |
|---|---|---|---|---|---|
| O4 | Evaluation of use cases | D4.1, D4.2, D4.3 | Implementation, evaluation | Complex | MS4.3 |

Table 20: Practical SmartenIT SMART objectives addressed.

| Objective ID | Specific (Measurable) | Metric (Measurable) | Achievable | Relevant | Project Month (Timely) |
|---|---|---|---|---|---|
| O2.2 | Which parameter settings are reasonable in a given scenario/application for the designed mechanisms to work effectively? | Number of parameters identified, where a reasonable value range is specified | Design, simulation, prototyping: T2.2, T3.4, T4.2 | Highly relevant output of relevance for providers and users | M24 |
| O3.1 | Which techniques to be used to retrieve management information from cloud platforms and OSNs? | Number of studied cloud providers, number of identified types of management information, number of compared retrieval techniques, number of studied OSNs, number of identified types of social information or meta-information related to users' social behaviour | Design: T1.1, T4.1, T4.2 | Highly relevant output of relevance for providers | M24 |
| O3.4 | How to monitor energy efficiency and take appropriate coordinated actions? | Number of options identified to monitor energy consumption on networking elements and end users' mobile devices, investigation on which options perform best (yes/no) | Design, simulation, prototyping: T1.3, T2.3, T4.1, T4.2, T4.4 | Highly relevant output of relevance for users | M3.6 |

This deliverable contributes to answering three specific practical questions:
Objective 2.2: Which parameter settings are reasonable in a given scenario/application for the designed mechanisms to work effectively?
The experiment definitions in Section 3 provide information about the parameters, measurements and metrics which must be taken into account during experiment execution. These have been selected so as to collect valuable and accurate results; the right selection is needed to properly evaluate the SmartenIT network traffic management solutions.
Objective 3.1: Which techniques to be used to retrieve management information from cloud platforms and OSNs?
The RB-HORST experiments (EFS) obtain and use identities and social data from Facebook. This concrete OSN has been selected to show and evaluate how such a platform can support network traffic management and enhance traditional management approaches. Additionally, the OFS experiments present a network traffic cost optimisation technique in a cloud-based inter-domain network environment.


Objective 3.4: How to monitor energy efficiency and take appropriate coordinated actions?
The answer to the question about energy efficiency monitoring is provided by the experiment which utilises the Energy Efficiency Measurement Framework (EEF). It compares the energy consumption of WiFi and cellular data transmissions under realistic bandwidth conditions with the use of RB-HORST.


7 References
[1] The SmartenIT Consortium: “Grant Agreement for STREP: Annex I – Description of Work (DoW),” 2012.

[2] T. Benson, A. Akella, D. A. Maltz: Network traffic characteristics of data centers in the wild. 10th ACM SIGCOMM Conference on Internet Measurement (IMC '10), ACM, New York, NY, USA, 2010.

[3] T. Benson, A. Anand, A. Akella, M. Zhang: Understanding data center traffic characteristics. SIGCOMM Computer Communication Review, Vol. 40, No. 1, January 2010.

[4] The SmartenIT project: Deliverable D4.1 – Test-bed Set-up and Configuration; October 2014.

[5] The SmartenIT project: Deliverable D2.4 – Report on Final Specifications of Traffic Management Mechanisms and Evaluation Results; October 2014.


8 Abbreviations
3G          Third Generation
AGH         Akademia Gorniczo-Hutnicza im. Stanislawa Staszica w Krakowie
AS          Autonomous System
BGP         Border Gateway Protocol
CDN         Content Delivery Network
CPU         Central Processing Unit
DA          Data Center Attachment Point
DC          Data Center
DCO         Data Center Operator
DoW         Description of Work
DTM         Dynamic Traffic Management
EEF         The Energy Efficiency Measurement Framework
EFS         End-user-Focused Scenario
GRE         Generic Routing Encapsulation
HTTP        Hyper Text Transfer Protocol
ICOM        Intracom S.A. Telecom Solutions
IP          Internet Protocol
IRT         Interroute S.P.A
ISP         Internet Service Provider
KPI         Key Performance Indicator
MONA        Mobile Network Assistant
M-to-M      Multiple to Multiple
NSP         Network Service Provider
OFS         Operator-Focused Scenario
OSN         Online Social Network
OVS         Open vSwitch
PC          Personal Computer
PSNC        Instytut Chemii Bioorganiicznej PAN
QoE         Quality of Experience
QoS         Quality of Service
RAM         Random-Access Memory
REST        REpresentational State Transfer
RB-HORST    Replicating Balanced tracker - HOme Router Sharing based On truST
RTT         Round Trip Time
S-to-S      Single to Single
SDN         Software Defined Networking
SEConD      Socially-aware Efficient Content Delivery
SMART       Specific Measurable Achievable Realistic And Timely
SNMP        Simple Network Management Protocol
SSID        Service Set Identifier
TUD         Technische Universität Darmstadt, Germany
UDP         User Datagram Protocol
uNaDa       User-owned NAno DAtacenter
UniWue      Julius-Maximilians Universität Würzburg
UZH         University of Zürich
WiFi        Wireless Fidelity
vINCENT     Virtual Incentives
VM          Virtual Machine

9 Acknowledgements
This deliverable was made possible thanks to the extensive and open help of the WP4 team of the SmartenIT project within this STREP, which includes, besides the deliverable authors indicated in the document control, Krzysztof Wajda (AGH), Gino Carrozzo (IRT), and Burkhard Stiller (UZH), who provided valuable feedback and input.
