
Unicorn Reference Architecture

Deliverable D1.2

Editors
Athanasios Tryfonos
Demetris Trihinas
Reviewers
Julia Vuong (CAS)
Panos Gouvas (Ubitech)
Sotiris Koussouris (Suite5)
Date
29 September 2017
Classification
Public


Contributing Authors and Version History

#   Name                 Partner  Description
1   Demetris Trihinas    UCY      Table of Contents (ToC) and partner contribution assignment.
2   Athanasios Tryfonos  UCY      Updated section 8 and subsections structure.
3   Zacharias Georgiou   UCY      Unicorn System Requirements and User Role Overview.
4   George Pallis        UCY      State-of-the-art for Cloud Application Design and Management & Cloud Application Security and Privacy.
5   Fenareti Lampathaki  Suite5   Added Reference Architecture diagrams and description.
6   Sotiris Koussouris   Suite5   Added Section 5 flow diagrams and updated architecture description.
7   Spiros Koussouris    Suite5   Updated architecture based on received feedback. Added Unicorn use-cases.
8   Manos Papoutsakis    FORTH    Merged content regarding reference architecture. Merged content regarding state-of-the-art.
9   Giannis Ledakis      Ubitech  Updated introduction and added executive summary and conclusion.
10  Panagiotis Gouvas    Ubitech  Merged content for demonstrators and implementation aspects.
12  Julia Vuong          CAS      Merged content regarding architecture flows and new diagrams, updated introduction and state-of-the-art.
13  Spiros Alexakis      CAS      Merged updates regarding demonstrators, use-cases and introduction.
14  Erik Robertson       Redikod  Minor refinements throughout the document. Document finalized for internal review.
15                                Reviewer comments addressed.
16                                Inserted Table of Abbreviations and updated Executive Summary.
17                                Final Version.


Contents
Contents 4
1 Introduction 12
1.1 Document Purpose and Scope 13
1.2 Document Relationship with other Project Work Packages 13
1.3 Document Structure 14
2 State of the Art and Key Technology Axes 16
2.1 Micro-Service Application Development Paradigm 16
2.2 Cloud Application Design and Management 19
2.2.1 Cloud Application Portability, Interoperability and Management 19
2.2.2 Monitoring 20
2.2.3 Elastic Scaling 22
2.3 Cloud Application Security and Data Privacy Enforcement 23
2.3.1 Data privacy-by-design and encrypted persistency 23
2.3.2 Security and Data Restriction Policy Enforcement Mechanism 24
2.3.3 Risk and Vulnerability Assessment 25
2.4 Containerization and Cluster Orchestration 26
3 Unicorn System Requirements and User Role Overview 30
4 Unicorn Reference Architecture 33
4.1 Motivational Example 40
4.2 Cloud Application Development and Validation 41
4.3 Application Deployment 45
4.4 Monitoring and Elasticity 47
4.5 Security and Privacy Enforcement 49
5 Unicorn Use-Cases 53
6 Unicorn Demonstrators 79
6.1 Enterprise Social Data Analytics 79
6.1.1 Overview 79
6.1.2 Technical Implementation 80
6.1.3 Business and Technical Challenges 81
6.1.4 Demonstrator Relevance to Unicorn Use Cases 83
6.2 Encrypted Voice Communication Service over Programmable Infrastructure 84
6.2.1 Overview 84
6.2.2 Technical Implementation 84
6.2.3 Business and Technical Challenges 85


6.2.4 Demonstrator Relevance to Unicorn Use Cases 86


6.3 Prosocial Learning Digital Game 87
6.3.1 Overview 87
6.3.2 Technical Implementation 87
6.3.3 Business and Technical Challenges 88
6.3.4 Demonstrator Relevance to Unicorn Use Cases 89
6.4 Cyber-Forum Cloud Platform for Startups and SMEs 90
6.4.1 Overview 90
6.4.2 Technical Implementation 91
6.4.3 Business and Technical Challenges 94
6.4.4 Demonstrator Relevance to Unicorn Use Cases 95

7 Implementation Aspects of Reference Architecture 96


7.1 Version Control System 98
7.2 Continuous Integration 98
7.3 Quality Assurance 99
7.4 Release Planning and Artefact Management 99
7.5 Issue Tracking 99
8 Conclusions 101
9 References 102


List of Figures
Figure 1: Technologies that Unicorn relies on and will contribute to 12
Figure 2: Deliverable Relationship with other Tasks and Work Packages 14
Figure 3: Container-based Virtualization 27
Figure 4: Usage of Linux containerization toolkit by Docker 28
Figure 5: CoreOS Host and Relation to Docker Containers 28
Figure 6: Identified Unicorn Actors 31
Figure 7: Non-functional requirements 32
Figure 8: Unicorn Reference Architecture 33
Figure 9: Eclipse Che High-Level Architecture 34
Figure 10: Unicorn Eclipse Che Plugin Overview 36
Figure 11: High-Level Unicorn Orchestration 37
Figure 12: Tosca Topology Template 38
Figure 13: Unicorn Core Context Model Mapping 40
Figure 14: Content Streaming Cloud Application 41
Figure 15: Cloud Application Development and Validation 44
Figure 16: Application Deployment 46
Figure 17: Monitoring & Elasticity Flow 48
Figure 18: Security Enforcement 50
Figure 19: Privacy Enforcement 51
Figure 20: Unicorn Use Case UML Diagram 53
Figure 21: S5 Enterprise Data Analytics Suite*Social Architecture 80
Figure 22: CAS SmartWe and OPEN Deployment 92
Figure 23: CAS OPEN Architecture 93
Figure 24: Major releases of Unicorn Integrated Framework 97
Figure 25: Development Lifecycle 98

List of Tables
Table 1: Mapping of functional requirements to user roles 31
Table 2: Policies for Content Streaming Application 41
Table 3: Define runtime policies and constraints use-case 54
Table 4: Develop Unicorn-enabled cloud application 54
Table 5: Package Unicorn-enabled cloud application 55
Table 6: Deploy Unicorn-compliant cloud application 56
Table 7: Manage the runtime lifecycle of a deployed cloud application 57
Table 8: Manage privacy preserving mechanisms into design time 58
Table 9: Manage privacy enforcement on runtime 59
Table 10: Manage security enforcement mechanisms 60
Table 11: Manage security enforcement mechanisms (enabler enforces security/privacy constraints) 61
Table 12: Monitor application behaviour and performance 62
Table 13: Adapt deployed cloud applications in real time 63
Table 14: Get real-time notifications about security incidents and QoS guarantees 64
Table 15: Perform deployment assembly validation 65


Table 16: Perform security and benchmark tests 66


Table 17: Manage cloud provider credentials 67
Table 18: Search for fitting cloud provider offerings 68
Table 19: Define application placement conditions 68
Table 20: Develop code annotation libraries 70
Table 21: Develop enablers enforcing policies via code annotations 71
Table 22: Provide abstract description of programmable cloud execution environment through unified API 71
Table 23: Develop and use orchestration tools for (multi-)cloud deployments 72
Table 24: Manage programmable infrastructure, service offerings and QoS 73
Table 25: Ensure secure data migration across cloud sites and availability zones 73
Table 26: Ensure security and data privacy standards 74
Table 27: Monitor network traffic for abnormal or intrusive behaviour 75
Table 28: Manage the Unicorn core context model 76
Table 29: Manage enablers enforcing policies via code annotations 77
Table 30: Manage cloud application owners 78
Table 31: Enterprise Social Data Analytics Relevance to Use Cases 83
Table 32: ubi:phone Relevance to use cases 86
Table 33: Prosocial Learning Relevance to use cases 89
Table 34: Cyber-Forum Relevance to use cases 95


Executive Summary
Unicorn deliverable D1.2 Unicorn Reference Architecture, hereafter simply referred to as D1.2, moves one step
closer to fulfilling the vision of the project: the development of a framework that enables digital SMEs and
startups across the EU to deploy cloud applications following the micro-service paradigm to multi-cloud
execution environments. In Unicorn D1.1 Stakeholders Requirements Analysis [1], we analyzed the particular
and demanding ICT needs of SMEs and startups by trawling leading industry studies and conducting personalized
interviews with our target audience. Through this analysis we extracted the functional and non-functional
requirements and user roles for the Unicorn Framework eco-system. Furthermore, we identified gaps in
industry and academia that Unicorn fills. Based on this comprehensive analysis, we define in D1.2 the overall
architecture of Unicorn and the components that comprise it, in complete alignment with the derived functional
and non-functional requirements.

The figure above illustrates a high-level overview of the Unicorn Reference Architecture, which comprises three
distinct layers: i) the Unicorn Cloud IDE Plugin, ii) the Unicorn Platform and iii) the Multi-Cloud Execution
Environment.


The Unicorn Cloud IDE Plugin is organized into two perspectives (facets). In the Development Perspective,
Application Developers use the Annotated Source Code Editor to develop secure, elastic and privacy-aware cloud
applications with the annotative Design Libraries, while Product Managers define design-time, run-time and
privacy policies and initiate the deployment process. In the Management Perspective, Application
Administrators can monitor and manage deployed applications through the plugin's intuitive Graphical User
Interface. The plugin itself is built on top of the popular open-source cloud IDE Eclipse Che [2], developed
and maintained by the Eclipse Foundation community. The reasoning behind Che being Unicorn's IDE of choice
originates from Unicorn's ICT SME/Startup survey results presented in D1.1, which have shown that Eclipse Che is
currently the most popular cloud IDE among EU-based startups and SMEs due to its collaborative development
capabilities, configurable run-time environments and embedded continuous integration and continuous delivery
features.

Within the scope of Unicorn, we define the term Unicorn micro-service as a chainable, horizontally and
vertically scalable-by-design, stateless service, which is adaptive to the underlying resources while exposing a
clear configuration and lifecycle management API. Unicorn micro-services are deployed to the Multi-Cloud
Execution Environment, which consists of the following: i) resources (CPU, memory, network, storage etc.) in the
form of VMs bound to the infrastructure of multiple cloud providers and/or availability zones and regions, ii) an
overlay cross-cloud networking fabric, iii) a lightweight operating system, iv) a container engine and v) a
container management and orchestration tool. Unicorn relies on Docker [3], which, as the D1.1 survey results
indicate, is the top-ranked container engine among SMEs. Docker containers are lightweight self-contained
systems that run on a shared underlying operating system. Unicorn uses CoreOS [4] as the operating
system for the VMs. CoreOS is a unikernel-like, lightweight, library-based operating system that provides
secure out-of-the-box support for container runtime engines such as Docker. Even though the CoreOS and
Docker approach is compatible with the Unicorn micro-service definition provided, it still suffers from
limitations, such as Docker Engine's restriction to managing deployments on a single machine, that prevent
Unicorn from achieving true multi-cloud deployments.
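The stateless property in the definition above is the crux of horizontal scalability. As a minimal, hypothetical sketch (plain Python, not Unicorn code; the handler and store names are invented for illustration):

```python
# Illustrative sketch: a stateless request handler in the spirit of the
# Unicorn micro-service definition. All state lives in an injected
# external store, so any replica behind a load balancer can serve any
# request. The "store" here is a plain dict standing in for an external
# service such as a key-value database.

def handle_request(store, session_id, item):
    """Record an item against a session without keeping local state."""
    basket = store.get(session_id, [])
    basket = basket + [item]          # no hidden state inside the service
    store[session_id] = basket
    return {"session": session_id, "basket": basket}

# Two calls, as if served by two different replicas sharing the store:
shared_store = {}
r1 = handle_request(shared_store, "s1", "apple")
r2 = handle_request(shared_store, "s1", "pear")
print(r2["basket"])  # ['apple', 'pear']
```

Because no replica keeps session data locally, a load balancer may route each request to any instance.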

To overcome these limitations, the Unicorn Platform is developed to facilitate the deployment and
orchestration of cloud applications across multi-cloud execution environments. The Unicorn Platform acts as
the link between the Unicorn Cloud IDE Plugin and the Multi-Cloud Execution Environment and is the layer where
the business logic of Unicorn is applied. Its main tasks include: i) the validation of the Unicorn artefacts
submitted for deployment, ii) the interpretation of the design libraries' annotations in the source code, iii) the
enforcement of privacy, security and elasticity policies at run-time and compile-time, based on the
aforementioned annotations, on the respective enablers, iv) the lifecycle management of deployed
applications and v) the orchestration and management of resources and containers on the Multi-Cloud Execution
Environment. Unicorn uses Kubernetes [5], an open-source orchestration tool for containers running on a cluster
of virtual hosts. While Kubernetes is an orchestration tool for containers, it lacks the ability to (de-)provision
infrastructure resources. For this task, the Unicorn Platform relies on the Arcadia Smart Orchestrator [6]. The
Arcadia Smart Orchestrator receives as input a directed acyclic graph [7], the service graph, whose nodes
represent individual services/applications and whose edges represent their relationships and interactions,
expressed in XML format using the OASIS TOSCA Specification [8], and is responsible for managing resources on
the infrastructure layer.
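As an illustration of the service-graph input described above, the following fragment sketches a TOSCA-style XML topology. Element names are simplified for readability and should not be read as the normative TOSCA schema; the node and relationship names are invented:

```xml
<!-- Illustrative only: a simplified TOSCA-style service graph with
     nodes for services and edges for their relationships. -->
<ServiceTemplate id="content-streaming-app">
  <TopologyTemplate>
    <NodeTemplate id="web-frontend"  type="Container.Docker"/>
    <NodeTemplate id="stream-engine" type="Container.Docker"/>
    <NodeTemplate id="metadata-db"   type="Container.Docker"/>
    <RelationshipTemplate id="frontend-to-engine" type="ConnectsTo">
      <SourceElement ref="web-frontend"/>
      <TargetElement ref="stream-engine"/>
    </RelationshipTemplate>
    <RelationshipTemplate id="engine-to-db" type="ConnectsTo">
      <SourceElement ref="stream-engine"/>
      <TargetElement ref="metadata-db"/>
    </RelationshipTemplate>
  </TopologyTemplate>
</ServiceTemplate>
```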

As a technologically advanced and innovative project, Unicorn will, alongside its original development, also
contribute to open-source projects. Some of the contributions of Unicorn include:


- An extension for Kubernetes to support cross-cloud network overlay management.
- A contribution to the OASIS TOSCA Specification, extending it to support containerized execution
  environments.
- The creation of the Eclipse Che plug-in.

Additionally, D1.2 presents a set of use-cases that further elaborate each Unicorn actor's roles and
responsibilities when using the final product. Specifically, we have identified 26 use-cases, each described in
detail, covering possible alternative flows and exceptions, and mapped to the relevant functional requirements.
Moreover, to demonstrate the emerging, real-life need for the Unicorn platform, the Unicorn demonstrators
are elaborated as perceived in the initial stages of the project implementation, and for each demonstrator the
relevance of each of the Unicorn use-cases mentioned above is discussed. The Unicorn demonstrators cover a
wide, representative spectrum of cloud applications, ranging from big data analytics (Demonstrator #1: Enterprise
Social Data Analytics) and encrypted voice communication (Demonstrator #2: Encrypted Voice Communication
Service over Programmable Infrastructure) to gaming (Demonstrator #3: Prosocial Learning Digital Game) and
cloud development platforms (Demonstrator #4: Cyber-Forum Cloud Platform for Startups and SMEs).

Finally, we elaborate on the approach to be followed to realise the functionalities described in this document
and to implement the components that constitute the Unicorn framework.


Table of Abbreviations
API Application programming interface
CI Continuous Integration
CPU Central Processing Unit
CSAR Cloud Service Archive
DAO Data Access Object
DDoS Distributed Denial of Service
EU European Union
FPGA Field-programmable gate array
GPU Graphics Processing Unit
GUI Graphical User Interface
HTTP Hypertext Transfer Protocol
IaaS Infrastructure as a Service
ICT Information and Communications Technology
IDE Integrated development environment
IDS Intrusion Detection System
IP Internet Protocol
JDT Java Development Tools
KPI Key Performance Indicators
LXC Linux Containers
MAPE Monitor Analyze Plan Execute
NFV Network Function Virtualization
OS Operating System
QoS Quality of Service
REST Representational state transfer
SDK Software Development Kit
SDN Software Defined Networking
SME Small and Medium-sized Enterprise
SOA Service Oriented Architecture
SQL Structured Query Language
SSH Secure Shell
SYBL Simple Yet Beautiful Language
TOSCA Topology and Orchestration Specification for Cloud Applications
UML Unified Modeling Language
VCS Version Control System
VM Virtual Machine
VNF Virtualized Network Functions
WP Work package
XACML eXtensible Access Control Markup Language
XML Extensible Markup Language
XSD XML Schema Definition
YAML YAML Ain't Markup Language


1 Introduction
The aim of the Unicorn project is to empower the European digital SME and startup eco-system by delivering a
novel and unified framework that simplifies the design, deployment and management of secure and elastic-by-
design cloud applications that follow the micro-service architectural paradigm and can be deployed over multi-
cloud programmable execution environments. Unicorn is by nature a technologically advanced project, and the
innovation activities leading towards designing and implementing the Unicorn eco-system are based upon both
utilizing and contributing to popular open-source and EU co-funded projects, as depicted in Figure 1. Like any
new and innovative technology, Unicorn will walk the extra mile and contribute add-ons or extensions to the
communities of these popular open-source projects. In this respect, Deliverable D1.2 introduces the key
technology axes, with focus devoted to defining a clear reference guide for the Unicorn Framework Architecture
and its comprising components, along with how the components intercommunicate to achieve the project's
business objectives and satisfy the demanding target audience it addresses.

Figure 1: Technologies that Unicorn relies on and will contribute to

To this end, the Unicorn Reference Architecture comprises three distinct layers: i) the Unicorn Cloud IDE
Plugin, ii) the Unicorn Platform and iii) the Multi-Cloud Execution Environment, each based on different
technologies and projects, as described in Section 4. The Unicorn Cloud IDE Plugin will be the focal point of
interaction between target users and the underlying Unicorn Platform. It will offer an intuitive graphical user
interface built entirely on top of the popular open-source cloud IDE Eclipse Che [2], developed and
maintained by the Eclipse Foundation community. Unicorn's ICT SME/Startup survey, conducted during the
Specification and Requirements phase of the project, has shown that Eclipse Che is currently the most popular
cloud IDE among EU-based startups and SMEs due to its collaborative development capabilities, configurable
run-time environments and embedded continuous integration and continuous delivery features. Unicorn will
take advantage of Eclipse Che's extensible nature and will develop a plugin specifically designed for cloud micro-
service [9] development and management.


Cloud applications in Unicorn will be bundled within Docker containers [3] following the micro-service
architectural paradigm. Docker containers are lightweight self-contained systems, bundled only with the
necessary libraries and settings required for the software to run, that execute on a shared operating system.
Unicorn's operating system of choice will be CoreOS [4], a unikernel-like, lightweight, library-based operating
system that provides secure out-of-the-box support for container runtime engines such as Docker Engine [10].
Docker Engine, while sufficient for small deployments, is limited to a single environment on a single machine.
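A hypothetical minimal Dockerfile illustrates this bundling principle; the base image, file names and port are examples, not Unicorn conventions:

```dockerfile
# Illustrative sketch only: the image bundles a small base, the service
# code and nothing else, matching the "only the necessary libraries and
# settings" property described above.
FROM python:3-alpine
COPY service.py /app/service.py
# A micro-service exposes a single, well-known port
EXPOSE 8080
CMD ["python", "/app/service.py"]
```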

To realize the project's vision, the Unicorn Platform to be developed should facilitate the orchestration of
deployments across multi-cloud execution environments. To overcome the single-host limitation imposed by
Docker Engine, Unicorn will make use of Kubernetes [5], an open-source orchestration tool for containers
running on a cluster of virtual hosts. While Kubernetes is an orchestration tool for containers, it lacks the ability
to (de-)provision infrastructure resources. For this task, the Unicorn Platform will rely on the Arcadia Smart
Orchestrator. The Arcadia Smart Orchestrator receives as input a directed acyclic graph [7], simply denoted as a
service graph, whose nodes represent individual services/applications and whose edges represent their
relationships and interactions, expressed in XML format using the OASIS TOSCA Specification [8], and is
responsible for managing resources on the infrastructure layer. To this end, Unicorn will contribute to the TOSCA
Specification by extending it to support containerized execution environments. Furthermore, Unicorn will
contribute to the open-source Kubernetes project by providing an extension to support cross-cloud network
overlay management. In addition, for the Unicorn Platform to be able to perform policy enforcement, we will
develop the Unicorn Core Context Model, which describes applications, resources and policies and, in general,
provides semantic clarity by defining all needed entities within the project.
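The service graph's DAG structure can be sketched in a few lines; the following illustrative Python (not Arcadia code; service names are invented) derives a dependency-respecting start-up order and rejects cyclic graphs:

```python
# Illustrative sketch of the service-graph structure described above: a
# directed acyclic graph whose nodes are services and whose edges are
# "depends-on" relationships. Kahn's algorithm yields a valid start-up
# order and detects cycles, which would make a graph undeployable.
from collections import deque

def deployment_order(edges):
    """edges: list of (service, dependency) pairs. Returns a start-up
    order in which every service follows its dependencies."""
    nodes = {n for e in edges for n in e}
    indegree = {n: 0 for n in nodes}
    dependents = {n: [] for n in nodes}
    for service, dependency in edges:
        indegree[service] += 1
        dependents[dependency].append(service)
    ready = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in dependents[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                ready.append(m)
    if len(order) != len(nodes):
        raise ValueError("service graph contains a cycle")
    return order

# A web tier depending on an API tier, which depends on a database:
print(deployment_order([("web", "api"), ("api", "db")]))  # ['db', 'api', 'web']
```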

1.1 Document Purpose and Scope


The purpose of this document is to provide a comprehensive overview of the Unicorn reference architecture
and the interaction among the system components, and also to document the use-cases that will be supported
within the context of the project demonstrators. In this respect, D1.2 aims to derive a clear overview of the
Unicorn reference architecture that will satisfy the system requirements captured and introduced in the
requirement analysis (D1.1) [1]. To this end, D1.2 provides a thorough system-level architectural specification
that comprises a high-level overview of the architecture layers and components, as well as the technology axes
and contributions to open-source projects that will guide the implementation of the particular layers. As D1.2
will guide the development of the technical components comprising the platform, this deliverable also includes
an initial overview of the interaction patterns and intercommunication schemes between system components,
users and third-party entities. Thus, starting from the mapping of the system requirements to platform
components, each component will be further decomposed into high-level functional blocks and supported
primitives and interfaces. In turn, D1.2 introduces the analysis performed to derive use-cases describing the
implementation scenarios of the mechanisms that are to be developed within the scope of the project
demonstrators, and their mapping to both system requirements and platform components.

1.2 Document Relationship with other Project Work Packages


With the definition of the clear use-cases that will be supported, along with the documentation of the Unicorn
Reference Architecture and the further decomposition of the basic system entities and the intercommunication
scheme among them, this deliverable will be used as an agreed-upon instruction set guiding the
development of the IT components that must be delivered by the Unicorn Project. Hence, D1.2 marks the
completion of Task 1.2, which includes the definition of the Unicorn Reference Architecture, and T1.3, which

includes the definition of the supported use-cases. Figure 2 depicts the direct and indirect relationship of the
deliverable to the other tasks and work packages (WPs). The definition of the system-wide reference
architecture is milestone, along with the requirement scheme documented in D1.1 [1], in order to drive the
technical work of WP2-WP5 which intent to guide the implementation of Unicorns components. What is more,
with the clear definition of the project use-cases, demonstrator descriptions and the prioritization of
requirements to match the needs of the use-cases, the work in WP6 can begin as planned.

Figure 2: Deliverable Relationship with other Tasks and Work Packages

1.3 Document Structure


The rest of this deliverable is structured as follows:

Section 2 presents the key technology axes and the State of the Art in developing applications following the
micro-service architectural paradigm, as well as advances in cloud application development and management,
and more specifically software frameworks, unified environments and related standards that facilitate the
description of cloud application features and requirements. This section also presents advances in the fields of
application monitoring, elasticity, privacy and security, and how Unicorn will go beyond the state-of-the-art
technologies presented.

Section 3 summarizes the functional and non-functional requirements and presents the identified Unicorn
actors as introduced in Unicorn D1.1 [1].

Section 4 focuses on the Unicorn Reference Architecture and explains in detail the various components that
compose it. It focuses on the features of the Eclipse Che Cloud IDE and how Che will be extended in the context
of Unicorn. This section also presents the notion of the Unicorn Core Context Model and its relationship with
other EU projects such as Arcadia, PaaSword and CELAR's CAMF. In addition, the concepts of application design,
validation and deployment, application monitoring and elasticity, and security and privacy are presented and
explained using flow diagrams.

Section 5 documents the Unicorn use-cases per identified user role and maps the use-cases to the respective
functional requirements.


Section 6 elaborates on the four demonstrator descriptions and their relevance to the use-cases, along with
technical implementation information and business as well as technical challenges.

Section 7 is devoted to documenting the implementation guidelines that will be taken into consideration
during the realization of the architecture. More specifically, the already agreed and set-up development cycle is
analysed, along with the best practices that will be adopted.

Section 8 concludes the deliverable.


2 State of the Art and Key Technology Axes


Before proceeding with a comprehensive description of the Unicorn Reference Architecture, it is important to
elaborate on the State of the Art with respect to the key technology axes relevant to the Unicorn project. In
relation to the Background and Terminology section of D1.1 [1], this section provides a reference guide to the
specific technologies embraced by the communities targeted by the Unicorn project.

2.1 Micro-Service Application Development Paradigm


Even if Unicorn as a platform is capable of providing scalability and elasticity features, these features have to be
used by appropriate applications designed with the ability to scale. For this reason, it is important for Unicorn
to clearly state the application paradigm that should be followed, along with design and implementation
guidelines.

One of the software development paradigms that have emerged over the last few years to resolve the
issues of modularity, distribution, scalability, elasticity and fault-tolerance is the micro-service architectural
paradigm. Although a globally accepted definition of the term does not currently exist, the micro-service
architectural paradigm is considered to be the result of decomposing a single application into smaller pieces,
often dubbed services, that tend to run as independent processes and are able to inter-communicate, usually
via lightweight and stateless communication mechanisms (e.g., RESTful APIs over HTTP [11]). These
micro-services are built around business capabilities and are independently deployable by fully automated
deployment machinery. Micro-services require a bare minimum of centralized management, and such services
may be written in different programming languages and even use different data storage technologies [9].
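The lightweight, stateless communication style mentioned above can be sketched with Python's standard library alone; the service, endpoint and data below are invented for illustration:

```python
# Illustrative sketch: one "micro-service" exposes a RESTful endpoint
# over HTTP and a consumer (e.g. another service) calls it. A real
# deployment would use a proper web framework; this only shows the
# stateless request/response pattern.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

class CatalogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Stateless: the response depends only on the request itself.
        if self.path == "/products":
            body = json.dumps({"products": ["book", "pen"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # silence request logging
        pass

server = HTTPServer(("127.0.0.1", 0), CatalogHandler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# A consumer service calls the API over plain HTTP:
url = "http://127.0.0.1:%d/products" % server.server_port
data = json.loads(urlopen(url).read())
server.shutdown()
print(data["products"])  # ['book', 'pen']
```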

In contrast to a single, monolithic application, a micro-service application is decomposed into services
organised around discrete business capabilities. Instead of all functionality being part of one enormous
monolith, each business capability is a self-contained service with a well-defined interface. The communication
between these services is usually handled by functional APIs that expose the core capabilities of each service.
The advantage of this is that separate teams are responsible for different aspects of the application, allowing
each team to develop and test independently, while making the application able to scale independently and
handle failures in a much-improved way due to modularity and distribution. The micro-service architectural
style can be seen as an evolution of the SOA (Service Oriented Architecture) style. The key difference between
the two is that, while applications built using SOA tended to focus on technical integration issues, with services
often implemented as very fine-grained technical APIs, the micro-service approach stays focused on
implementing clear business capabilities through larger-grained business APIs [12].

The micro-service paradigm was initially introduced in D1.1 [1]. In this document, we are mostly interested
in explaining how applications can be created using the micro-service paradigm so that they can be handled
by Unicorn, and also how Unicorn will handle these applications. The micro-services that are created are the
components, and are considered the basic building blocks of the applications. They are used to compose
distributed applications with a set of components that depend on each other. Each component has distinct
properties that can be configured before use. Some can be used as-is, while others require other
components to operate. Service Graphs are composed of components operating together as a unit, while


being independently manageable. These structures can range from single-component service graphs to
sophisticated, complex services.

For the scope of Unicorn, however, we suggest a definition of micro-services based on a common set of
characteristics. In Unicorn, we have collected various guidelines, best practices and rules, and also utilized
existing efforts in academia and research, in order to describe the details of an elastic and scalable application
and to help us model the application for the Unicorn Context Model. It is crucial to identify the basic
principles a developed service should exhibit in order to be characterized as a micro-service.

Basic rules, like breaking a big monolith down into many small services and making each service serve a single
function, are the starting points for designing with the micro-service paradigm. This directly leads to
communication decisions involving REST APIs and message brokers [13]. Implementation patterns suggested by
IT leaders such as IBM [12] include the creation of per-service Continuous Integration and Continuous Delivery
pipelines, even if the code repository remains the same for the whole application.

Among the most widely followed guidelines and best practices around the micro-service architectural
paradigm are those identified by Sam Newman, author of the popular and best-selling Principles of Micro-Services
[14]:

- Model Around Your Business Domain: Domain-driven design can help you find stable, reusable
  boundaries.

- Build a Culture of Automation: More moving parts means automation is key.

- Hide Implementation Details: One of the pitfalls that distributed systems can often fall into is tightly
  coupling their services together.

- Embrace Decentralization: To achieve autonomy, push power out of the center, organizationally and
  architecturally.

- Deploy Independently: Perhaps the most important characteristic of micro-services.

- Focus on Consumers First: As the creator of an API, make your service easy to consume.

- Isolate Failure: A micro-service architecture does not automatically make a system more stable.

- Make Services Highly Observable: With many moving parts, understanding what is happening in your
  system can be challenging.

Another related effort is the twelve-factor methodology [15], which can be applied to any software-as-a-service
application using any combination of backing services (database, queue, memory cache, etc.). This
methodology allows the creation of portable applications with build and deployment automation that enables
portability and scalability on modern cloud platforms. The 12 factors take into consideration the entire lifecycle
of application development and deployment, covering the codebase, dependencies, configuration,
backing services treated as attached resources, the separation of build and run stages, stateless processes,
port binding, concurrency, disposability, development/production parity, the treatment of logs,
and the management of admin processes.
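Two of these factors, configuration in the environment (III) and port binding (VII), can be illustrated with a short sketch. All names, variables and defaults below are assumptions for illustration, not part of any Unicorn component.

```python
import os

# Twelve-factor sketch (factor III): configuration comes from the environment,
# never from files baked into the build, so one build runs on any platform.
class Config:
    def __init__(self, environ=os.environ):
        # Backing services (factor IV) are attached resources located via
        # URLs/credentials supplied by the environment.
        self.database_url = environ.get("DATABASE_URL", "sqlite:///local.db")
        self.cache_url = environ.get("CACHE_URL", "memory://")
        # Factor VII: the app exports its service by binding to a port.
        self.port = int(environ.get("PORT", "8080"))

cfg = Config({"DATABASE_URL": "postgres://db:5432/app", "PORT": "9000"})
print(cfg.database_url, cfg.port)
```

The same image can therefore be promoted unchanged from development to production by changing only the deploy-time environment.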

On top of that, the emergence of programmable infrastructure has added parameters that should be
taken into consideration in a strict definition of a micro-service. Programmable infrastructure allows
the dynamic reconfiguration of provisioned resources (vCPUs, memory, storage, bandwidth, security groups, etc.),
a capability that is rarely taken into consideration during micro-service development.

The aforementioned rules and factors have been taken into consideration in the creation of Unicorn's
definition of micro-services. According to this definition, any micro-service developed using Unicorn
should:

1. Be stateless in order to be horizontally scalable by design. Any stateless service can scale easily
with the use of additional services such as network or web load balancers. Traditionally, these balancers
were statically configured by administrators, but with the emergence of programmable infrastructure and the
Virtualized Functions (VFs), this task can be performed by a cloud orchestrator. Creating an application (a
service graph in Unicorn) that is stateless is a challenging task, since the entire business logic must
entail stateless behaviour in order to be horizontally scalable by design.
2. Be reactive to runtime modification of offered resources in order to be vertically scalable by design.
Thanks to developments in hypervisor technologies and OS kernels over recent years, provisioning and
de-provisioning resources on a running operating system has become easier, and this ability should be taken
into consideration in design decisions.
3. Be agnostic to physical storage, network and general-purpose resources. Every Unicorn micro-service
should be capable of being ported to any general-purpose cloud infrastructure (e.g.: file-system
persistency should be avoided).
4. Expose the required chainable interfaces. Micro-services use these interfaces to form a service graph.
Dynamic coupling of the services is highly valuable when the actual binding can be fully automated
at runtime; this requires the developed micro-services to be chainable and to use metadata for
configuring the coupling between the services.
5. Encapsulate a lifecycle-management programmability layer, which will be used during the placement
of a service graph onto infrastructural resources. Micro-services in Unicorn should expose a basic
programmability layer that handles high-level lifecycle operations (e.g. removing a chained
dependency) so that the service is protected from violations arising out of a chaining constraint (e.g. delay, legal
aspects).
6. Expose its configuration parameters along with their metadata. A micro-service that provides a
configuration layer can adapt to a new configuration without interrupting its main thread of
execution.
7. Expose quantitative metrics regarding the QoS of the micro-service. While microservice-agnostic
metrics are easily measured, business-logic-specific metrics (e.g. active sessions,
the average delay per task) cannot be quantified unless the micro-service developer implements
specific application-level probes that provide these metrics.

In conclusion, the definition of a micro-service in the context of Unicorn is as follows:

A micro-service is a chainable, horizontally and vertically scalable-by-design, stateless service, which is
adaptive to the underlying resources while exposing a clear configuration and lifecycle management API.
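Requirements 6 and 7 above can be sketched as a minimal stateless service exposing its configuration (with metadata) and its QoS metrics over HTTP. This is an illustrative sketch only; the endpoint paths, metric names and configuration schema are assumptions, not the Unicorn API.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative: configuration parameters with metadata (requirement 6) and
# application-level QoS metrics fed by hypothetical probes (requirement 7).
CONFIG = {"log_level": {"value": "info", "type": "enum",
                        "options": ["debug", "info", "warn"]}}
METRICS = {"active_sessions": 0, "avg_task_delay_ms": 0.0}

class MicroServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/config":      # expose config + metadata
            body = CONFIG
        elif self.path == "/metrics":   # expose quantitative QoS metrics
            body = METRICS
        else:
            self.send_error(404)
            return
        payload = json.dumps(body).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # silence per-request logging
        pass

def start(port=0):
    """Start the service on an ephemeral port; being stateless, any number
    of identical replicas can be started behind a load balancer."""
    server = HTTPServer(("127.0.0.1", port), MicroServiceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server

if __name__ == "__main__":
    srv = start()
    print("listening on port", srv.server_address[1])
```

Because the handler keeps no session state, horizontal scaling reduces to starting more replicas, matching requirement 1.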


2.2 Cloud Application Design and Management


This section provides a state-of-the-art analysis on technologies, tools and standards currently available either
in industry or in academia that provide application management functionality for cloud applications. Under the
perspective of the micro-service architectural paradigm and within the Unicorn context, modern cloud
applications should incorporate portability and interoperability features. In addition, cloud applications should
be adaptable during runtime and able to provide real-time measurements of resource consumption. The
most challenging aspect of deploying and managing modern cloud applications is tackling the restrictions and
obstacles that stand in the way of real multi-cloud application deployments. Part of the Unicorn project is to provide the
means to realize cloud application monitoring, auto-scaling and management in multi-cloud execution
environments.

2.2.1 Cloud Application Portability, Interoperability and Management


Cloud computing facilitates scalable and cost-effective computing solutions, by rapidly provisioning and
releasing resources with minimal user management effort and cloud provider interaction [16]. Cloud has been
very useful in supporting various applications, including computation-intensive, storage-intensive and
bandwidth-demanding Web applications. However, the number of providers offering a variety of cloud services
and capabilities has increased dramatically over the past few years. Each provider promotes its own cloud
infrastructure, features, standards and formats to access the cloud in a unique way, differentiated from the others,
preventing providers from agreeing upon a widely accepted, standardized way to support cloud applications.
This prevents cloud application developers from composing a heterogeneous context of multiple cloud platforms
and offerings to deploy their cloud applications, which very often leads to developers being locked into a
specific set of services from a concrete cloud environment [17]. Nevertheless, the need for multiple clouds to
be able to work seamlessly together is rising [18].

Cloud computing interoperability and portability are closely related terms and may often be confused. According
to R. Cohen [19], Cloud Interoperability refers to the ability of multiple cloud platforms to work together, or inter-
operate, while Cloud Portability is the ability of data and application components to be easily moved and reused
regardless of the choice of cloud provider, operating system, storage format or APIs. Numerous standardization
and specification efforts have taken place over the years, trying to tackle portability restrictions across
different cloud providers and to enable interoperable application development; however, no standard has yet been
universally accepted to resolve those issues.

The Topology and Orchestration Specification for Cloud Applications (TOSCA) [8] is an OASIS standard used to
describe both a topology of cloud-based Web services, consisting of their components, relationships, and the
processes that manage them, and the orchestration of such services, that is, their complex behaviour in relation
to other described services. By increasing service and application portability in a vendor-neutral ecosystem,
TOSCA aims at enabling portable deployment to any compliant cloud, smoother migration of existing
applications to the cloud, as well as dynamic, multi-cloud provider applications. Cloud Application Management
for Platforms (CAMP) [20] is another OASIS specification that aims at defining a harmonized API, models,
mechanisms and protocols for the self-service management (provisioning, monitoring and control) of
applications in a PaaS, independently of the cloud provider. Metsch et al. [21] describe the Open Cloud
Computing Interface (OCCI), which is a RESTful protocol and API published by the Open Grid Forum (OGF) [22]
as a result of a community effort. The objective of the proposed standard is to define a shareable and
homogeneous interface to support all kinds of management tasks in the cloud environment. Although the
original scope of OCCI covered the creation of a remote management API for IaaS platforms, the proposed
interface is currently suitable to represent other cloud models, such as PaaS and SaaS, but it could also be applied
to other programming paradigms. Open Virtualization Format (OVF) [23] provides a platform independent,
efficient, open and extensible packaging and distribution format that facilitates the mobility of virtual machines
and gives customers platform independence. OVF takes advantage of the Distributed Management Task Force's
(DMTF) [24] Common Information Model (CIM) [25], where appropriate, to allow management software to
clearly understand and easily map resource properties by using an open standard.

Standardisation efforts by themselves are not sufficient to tackle interoperability and portability issues. Cloud
Application Management tools and libraries have been developed to further promote interoperability and
portability of cloud applications. Jclouds [26] and Libcloud [27] are open-source libraries provided by Apache,
the former in Java and the latter in Python, that allow easier management of different cloud resources
through a unified API. OpenNebula [28], on the other hand, is an open-source data centre virtualization
technology, offering feature-rich, flexible solutions for the comprehensive management of virtualized data
centres to enable on-premise Infrastructure-as-a-Service clouds. It provides many different interfaces that can
be used to interact with the functionality offered to manage physical and virtual resources.

A common denominator in all the aforementioned standards and tools is that none provide the ability to
describe and manage cloud services distributed across multiple availability zones and/or providers. To provide
multi-cloud support, Unicorn will introduce a new standard by extending the OASIS TOSCA specification to
support secure and elastic-by-design multi-cloud application deployments. In addition, Unicorn will also
provide innovative deployment/orchestration capabilities supporting unikernel and containerized execution
environments addressing data portability and interoperability issues along with resource provider trust
validation issues.

2.2.2 Monitoring
With the advantages of containers being portability and interoperability, along with their low overhead compared to
hardware virtualization, containerized application environments are being used to develop and deploy large-
scale distributed applications for data processing and servicing [29], [30]. However, with the adoption of
containerized solutions for micro-services, management solutions, in particular monitoring and auto-scaling, have
to seamlessly manage more ephemeral and complex services at scale than ever before [31].

General-purpose monitoring tools such as Ganglia [32] and Nagios [33], have been traditionally used by system
administrators to monitor fixed distributed infrastructures, such as computing grids and server clusters. Cloud
consumers tend to adopt such solutions to monitor provisioned cloud infrastructural offerings as well. However,
cloud offerings provisioned for micro-service deployments have different requirements than fixed server
environments [9], [34], [35], as they consist of multiple service parts, hosted on ephemeral and on-demand
provisioned virtual resources. This makes the aforementioned monitoring tools unsuitable for rapidly elastic and
dynamic cloud deployments. Cloud monitoring tools such as Amazon CloudWatch [36] and AzureWatch [37]
provide Monitoring-as-a-Service to their consumers. Despite the fact that these tools are easy to use and well-
integrated with the underlying platform, their biggest disadvantage is that they limit their operation to specific
cloud providers. Thus, these tools lack in terms of portability and interoperability.

To address portability, Rak et al. [38] introduce the mOSAIC monitoring system which collects metrics in a cloud-
independent manner via the mOSAIC API across supported cloud platforms. Similarly, Al-Hazmi et al. [39]
introduce monitoring for federated clouds to collect low-level monitoring data from deployments and distribute
them to user-deployed aggregators through message notification queues, much like OpenStack's
Ceilometer [40]. Although an interesting approach, feasibility is limited to IaaS deployments without support for
rapid elasticity. In turn, MonPaaS [41] is a distributed Nagios-based monitoring solution for PaaS monitoring,
although tightly coupled to OpenStack. On the other hand, JCatascopia [42] is an open-source monitoring tool
developed with emphasis on support for rapidly elastic multi-cloud deployments by featuring a novel agent
bootstrapping process based on a variation of the pub/sub communication protocol to dynamically detect when
virtual instances have been (de-)provisioned due to elasticity actions. This process diminishes the need for re-
contextualization when providing elasticity support, by continuously reflecting the current topology and
resource configuration. However, JCatascopia is not tailored for containerized environments, as it is agent-based,
supports solely IaaS monitoring, and requires a file system for deployment.

In regards to application monitoring, state-of-the-art solutions such as New Relic APM [43], Datadog [44] and
AppDynamics [45] provide monitoring libraries supporting both IaaS and PaaS deployment models for various
programming languages (e.g., java, ruby, python) and frameworks (e.g., tomcat, django, sql). These tools expose
APIs for application performance collection and application-specific metric injection. Metrics are periodically
disseminated to shared data warehouses allowing users, depending on their subscription model, to perform
queries on real-time and historical data and produce analytic insights. From these tools New Relic APM and
Datadog have recently been tested on containerized environments. However, these tools are not tailored to
the unique characteristics of containerized (and unikernel) environments: they increase image sizes, and users
report that the monitoring system, which should be non-intrusive, exhibits a high runtime footprint,
hogging user-paid virtual resources which are otherwise deployed for application consumption [46].

In regards to containerized environment monitoring, Docker Stats [47] is the native monitoring tool for Docker
that provides users with command line access to basic monitoring system data for the application containers
running on the machine hosting the Docker engine. While monitoring data can be streamed automatically, the
biggest downside of Docker Stats is that it does not store monitoring data and, thus, no point of reference for
historic data access is available. On the other hand, cAdvisor [48] is an open-source monitoring tool developed
by Google for container monitoring with native support for Docker containers. cAdvisor collects and visualizes
graphically real-time monitoring data relevant to applications in active state. In particular, cAdvisor hooks itself
to the endpoint of the Docker daemon running on each host and immediately starts collecting data from all
running containers, including cAdvisor which is deployed as a container as well. To store and export historic
monitoring data cAdvisor provides support for different backends such as ElasticSearch, InfluxDB, BigQuery and
Prometheus. However, cAdvisor can only monitor one Docker host; thus, for multi-node deployments,
monitoring data will be disjoint and spread across the cluster's cAdvisor instances. In contrast to cAdvisor,
Scout [49], a paid monitoring-as-a-service solution, aggregates data from multiple hosts and visualizes the data
over longer timeframes. Interestingly, Scout offers users the ability to define metric alerts that send email
notifications if a metric exceeds or drops below a configured threshold. However, in the case where a
deployment consists of heterogeneous application containers on the same host (e.g., web and data-backend
containers), Scout does not support the configuration of alerts per container type as alerts are applied to all
containers on each host.
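The per-container-type alerting that the text notes Scout lacks can be sketched as follows. The rule schema, metric names and thresholds are assumptions for illustration only.

```python
# Illustrative per-container-type alert rules: thresholds are keyed by the
# type of container (web, data-backend), not applied host-wide.
ALERT_RULES = {
    "web":  {"metric": "cpu_percent", "op": ">", "threshold": 80.0},
    "data": {"metric": "mem_percent", "op": ">", "threshold": 90.0},
}

def evaluate_alerts(samples):
    """samples: list of dicts like {"container": "web-1", "type": "web",
    "cpu_percent": 91.0, "mem_percent": 40.0}. Returns triggered alerts."""
    triggered = []
    for s in samples:
        rule = ALERT_RULES.get(s["type"])
        if rule is None:
            continue  # no rule defined for this container type
        value = s[rule["metric"]]
        fired = (value > rule["threshold"] if rule["op"] == ">"
                 else value < rule["threshold"])
        if fired:
            triggered.append((s["container"], rule["metric"], value))
    return triggered

print(evaluate_alerts([
    {"container": "web-1", "type": "web", "cpu_percent": 91.0, "mem_percent": 40.0},
    {"container": "db-1", "type": "data", "cpu_percent": 30.0, "mem_percent": 50.0},
]))
```

Here only the web container fires its CPU rule; the data-backend container on the same host is judged against its own memory rule.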

In general, despite recently introduced tools, a gap remains in container monitoring that calls to be filled.
Recent trends in monitoring present a movement towards monitoring-as-a-service, which eases management of
the monitoring infrastructure. However, there still exists a number of challenges, especially in the field of multi-cloud
containerized execution environments, such as: (i) the increased movement of monitoring data across
geo-distributed availability zones to central monitoring and processing endpoints; (ii) data restrictions and
security risks of disseminating sensitive human, governance and application performance data across availability
zones especially when moving from IaaS monitoring to application and client-side monitoring (e.g., customer
behaviour, transactions); and (iii) the significant cost and actual runtime overhead imposed to monitor
ephemeral and highly dynamic decomposed cloud (micro-services) over virtualized, shared and compact
conceptualized execution environments. Therefore, Unicorn will provide the micro-service cloud community
with a monitoring tool which is (i) portable, thus capable of supporting multiple cloud provider offerings; (ii)
tailored to the particular characteristics of containerized execution environments; (iii) self-adaptive to reduce
monitoring overhead and costs, as well as scaling to the needs of the platform's users; and (iv) transparent,
requiring no additional effort and configuration from cloud consumers as the deployment spans across multiple
availability regions and cloud sites.

2.2.3 Elastic Scaling


Elasticity is defined as the degree to which a system is able to adapt to workload changes by provisioning and
de-provisioning resources in an autonomic manner, such that at each point in time the available resources match
the current demand as closely as possible [16]. It is used to avoid inadequate provision of resources and
degradation of system performance while achieving cost reduction [50], making this service fundamental for
cloud performance.

Existing scalability mechanisms in Cloud Computing typically consider a single cloud provider and commonly rely
on replicating application components according to scalability rules that rely on events triggered by cloud
monitoring. Amazon's auto-scaling service [51] was one of the first auto-scaling services, offered through its AWS
cloud. It employs simple threshold-based rules or scheduled actions based on a timetable to regulate
infrastructural resources (e.g., if CPU usage is above 70%, then add a new VM). Similar simple rule-based elasticity
offerings have been implemented by other cloud providers, such as Google's Cloud Platform autoscaler [52],
Microsoft Azure's Autoscale [53] and Rackspace's Auto Scale [54]. In regards to container scaling, AWS
introduced ECS [55], a container management service that supports Docker containers, offering the ability to
scale containers through Service Auto Scaling [56]. Google's Container Engine [57] uses Kubernetes [5], an
orchestration tool for containers that offers an autoscaling feature which scales by observing CPU usage.
However, the challenge to coordinate elasticity between a virtual machine and its placed containers remains
unaddressed. The problem here is that resizing container resources is limited by the resources of the virtual
machine in which it is placed, thus, after certain limits the container cannot gain more resources.
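The threshold rule and the container/VM coupling problem described above can be sketched together: resizing a container only works while its host VM has spare capacity, after which the only option is to scale out. The function name, threshold and step size are illustrative assumptions.

```python
# Sketch of a "CPU above 70% => scale" rule, extended with the VM-capacity
# limit: a container cannot be resized beyond the resources of its host VM.
def scaling_decision(cpu_percent, container_cpus, vm_cpus,
                     threshold=70.0, step=1):
    if cpu_percent <= threshold:
        return "no-op"
    if container_cpus + step <= vm_cpus:
        return "resize-container"   # vertical scaling within the host VM
    return "provision-vm"           # VM capacity exhausted: scale out

print(scaling_decision(85.0, container_cpus=2, vm_cpus=4))  # resize-container
print(scaling_decision(85.0, container_cpus=4, vm_cpus=4))  # provision-vm
print(scaling_decision(40.0, container_cpus=2, vm_cpus=4))  # no-op
```

Coordinating the two branches (container resize versus VM provisioning) is exactly the diagonal-scaling decision that remains unaddressed in the services surveyed above.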

Apart from elasticity control services, substantial work based on reinforcement learning, control
theory and time-series analysis is available on elasticity behaviour analysis and the algorithmic determination of
compliant scaling actions. Almeida et al. [58] propose a branch-and-bound approach to optimize the
adaptation process of multi-cloud applications during runtime, while Tolosana-Calasanz et al. [59] propose a
shared token bucket approach for dynamic resource allocation of data processing engines. A more intuitive
approach is proposed by Dustdar et al. [60], defining elasticity as a complex property, having as major
dimensions resource, cost and quality elasticity, capturing not only computing related aspects of application
operation, but also business aspects. In turn, Copil et al. [61] introduce an elasticity specification language,
denoted as SYBL, which allows the definition of complex and multi-dimensional elasticity policies for rSYBL, an
elasticity controller capable of managing cloud elasticity based on SYBL directives. On the other hand,
Tsoumakos et al. [62] introduce an open-source elasticity control service, which models the problem of elastically
scaling NoSQL databases as a Markov Decision Process and utilizes reinforcement learning to allow the system
to adaptively decide the most beneficial scaling action based on user policies and past observations, while
Naskos et al. [63] extend this model to resizing clusters of a single generic application hosted on virtual
machines.

In regards to elastically scaling multi-cloud deployments, ongoing research is currently putting emphasis on
cloud federation (Bermbach et al. [64], Kondikoppa et al. [65]). In turn, Ferry et al. [66] propose an approach for
the continuous management of scalability in multi-cloud systems, based on ScaleDL and CloudMF [67]. Jiao et
al. [68] propose a multi-objective data placement technique for multi-cloud socially-aware services, considering
multiple objectives such as latency, locality, or cost. Last but not least, 4CaaSt framework [69] introduces cloud
blueprints to support elastic scaling of cloud applications in heterogeneous environments with PaaS level
resource management, allowing the customer to define elasticity rules based on KPIs.

Based on the above, current solutions for managing elastic resource allocation rely on metric
violation rules, which are rather limited and require significant manipulation from the user to achieve optimal
resource allocation, significantly reduce costs and allocate resources efficiently. A number of
algorithms and elastic services have been proposed to support more complex control and, indeed, techniques
such as the SYBL elasticity specification language [61], MELA [70] and TIRAMOLA [62], developed within the
lifespan of the CELAR FP7 project, point in the right direction. However, challenges such as the ping-pong
and cold-start effects are still pending and must be addressed. In turn, while multi-cloud elasticity has
been researched by following a federated cloud approach, this area is still far from realization, especially when
referring to vendors capitalizing large stakes in the cloud market. The lack of standardized APIs remains a major
challenge for multi-cloud elastic scaling, since cloud providers and platforms use their own technology and
techniques, making it difficult for clients to exploit the great advantages of multi-cloud deployments. Therefore,
Unicorn will provide to the cloud community a tool that (i) supports multi-cloud elasticity control,
acknowledging heterogeneity among applications services and functionalities of each cloud provider; (ii) is
capable of supporting fine-grained control over VMs and container-based applications, using diagonal scaling;
(iii) will improve elasticity control based on semi-supervised algorithms utilizing both reactive and proactive
techniques; (iv) is transparent, requiring no additional effort and configuration from cloud consumers as the
deployment spans across multiple availability regions and cloud sites; and (v) is enclosed as an independent
framework allowing state-of-the-art elasticity controllers to utilize these techniques to enhance their control
mechanisms for better scaling decisions regarding user-defined optimization strategies.

2.3 Cloud Application Security and Data Privacy Enforcement


In this section, we describe Unicorn's advances in security and data privacy enforcement, leveraging the
security- and privacy-by-design principles of the Unicorn architecture. We specifically address the key innovation
goals of Unicorn, namely enforcing data privacy through access control policies and encrypted persistency,
security and data restriction policy enforcement, and risk and vulnerability assessments.

2.3.1 Data privacy-by-design and encrypted persistency


Security of sensitive information is one of the key challenges in storing data on the cloud today. Enterprises
require that sensitive data be stored in encrypted form and encryption keys never be accessible to malicious
users. The data persistency layer (databases) of cloud applications is a prime target for takeover by adversaries
of any enterprise. Database-related attacks (such as SQL injection) are especially hard to address with typical
corporate security fences, such as intrusion prevention systems and web application firewalls, and recent
research has looked into related challenges [71].

It is a well-known fact that cloud applications and software platforms handling sensitive data should encrypt
their data, and encryption should be based on approved algorithms using long, random keys [72]. It is good
practice to perform encryption on the client side, and data should remain encrypted in transit, at rest, and when
in use. The cloud provider's staff should never have direct access to decryption keys. In the case of a successful attack,
post-exploitation of sensitive data is possible using tools with GPU processing power, able to crack a cipher if a
symmetric encryption algorithm is used for data protection [71].

To address the aforementioned challenges, Unicorn utilizes the outcomes of the PaaSword framework [73], which provides
cloud applications with access to a distributed and encrypted data persistence layer (commonly referred to
as a Virtual Database). From this framework, Unicorn can utilize the facility to annotate data access objects
(DAOs) with the desired encryption at table or row level. The encryption scheme supported by Unicorn will
also allow search over encrypted data. In that way, the confidentiality and integrity of the stored data are
ensured. Unicorn will advance the state of the art by extending PaaSword, while evaluating and selecting
appropriate encryption schemes to protect the index tables of a Virtual Database for SQL databases.
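The idea of searchable encrypted persistency can be illustrated with a "blind index": equality search works by comparing keyed hashes, so plaintext never reaches the database. This is a hedged sketch, not the PaaSword scheme; the keys are illustrative, and the XOR keystream below is a dependency-free stand-in for a real cipher such as AES-GCM, never to be used in practice.

```python
import hashlib
import hmac

INDEX_KEY = b"index-key"  # illustrative keys; a real system uses a KMS
DATA_KEY = b"data-key"

def blind_index(value: str) -> str:
    """Deterministic keyed hash, stored alongside the ciphertext so the
    database can answer equality queries without seeing plaintext."""
    return hmac.new(INDEX_KEY, value.encode(), hashlib.sha256).hexdigest()

def toy_encrypt(value: str, nonce: bytes) -> bytes:  # NOT real encryption
    stream = hashlib.sha256(DATA_KEY + nonce).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(value.encode()))

def toy_decrypt(ct: bytes, nonce: bytes) -> str:
    stream = hashlib.sha256(DATA_KEY + nonce).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(ct)).decode()

# "Virtual database" table: ciphertext column plus searchable index column.
table = []
for i, email in enumerate(["alice@example.com", "bob@example.com"]):
    nonce = i.to_bytes(8, "big")
    table.append({"nonce": nonce,
                  "email_ct": toy_encrypt(email, nonce),
                  "email_idx": blind_index(email)})

# Equality search over encrypted data: hash the query, match the index.
hits = [r for r in table if r["email_idx"] == blind_index("alice@example.com")]
print(len(hits), toy_decrypt(hits[0]["email_ct"], hits[0]["nonce"]))
```

The trade-off sketched here is the one the text refers to: a deterministic index leaks equality of values, which is why selecting an appropriate scheme for the index tables is a research task in itself.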

2.3.1.1 Data privacy based on context-aware access control


Unicorn also envisions enhancing the security aspects of the deployed applications by providing a mechanism
that enables fine-grained, context-aware access control on the applications. For this reason, the
PaaSword Context-Aware Security Model [74] (based on the XACML standard [75]) will be used as the core of
the context-aware authorization mechanism that Unicorn envisions. This model will be complemented with the
capability to annotate data access objects (DAOs) with appropriate annotations for defining policies at code
level. Configuration and enforcement of policies should also be supported by Unicorn, and for this reason an
appropriate intervention mechanism shall be developed, probably by adapting and extending PaaSword or
by using Drools [76] as a constraint satisfaction solver.

2.3.2 Security and Data Restriction Policy Enforcement Mechanism


Cloud computing platforms are often hosted on publicly accessible infrastructures and subject to a wide variety
of security attacks. Most attacks are carried out over the network and can thus be recognized and intercepted
by examining the information flow exchanged between the cloud-based software system and the outside world.
Traditional attacks such as IP spoofing, Address Resolution Protocol (ARP) spoofing, flooding, Denial of Service
(DoS), Distributed Denial of Service (DDoS) etc. constitute key pain points for cloud computing operators and
users. Relatively simple outside attacks can be prevented using firewalls but more complex outside or insider
attacks make the incorporation of efficient intrusion detection systems (IDS) and intrusion prevention systems
(IPS) a necessity in cloud infrastructure [77].

Several IDS techniques are applicable and can be used in the cloud. Signature-based intrusion detection defines
a set of rules or signatures used to decide whether a given pattern identifies an intruder, and can detect
known attacks. Anomaly detection identifies events that appear to deviate from normal system
behaviour, a technique applicable to detecting unknown attacks at different levels. Methods such as
Artificial Neural Networks, fuzzy logic, association rule mining, etc. can handle uncertain or partially-true data
[78].
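A minimal form of the anomaly-detection idea is a z-score test: flag observations deviating from the observed baseline by more than k standard deviations. The traffic numbers and the threshold k are illustrative assumptions; real detectors use far richer models, as noted above.

```python
import statistics

# Toy anomaly detector: flag samples whose z-score exceeds a threshold k.
def find_anomalies(samples, k=2.5):
    mean = statistics.mean(samples)
    stdev = statistics.pstdev(samples)
    if stdev == 0:
        return []  # constant series: nothing deviates
    return [(i, x) for i, x in enumerate(samples)
            if abs(x - mean) / stdev > k]

# Requests/second on a service: one burst stands out from normal behaviour.
rates = [100, 104, 98, 101, 99, 102, 100, 950, 103, 97]
print(find_anomalies(rates))  # the burst at index 7 is flagged
```

A signature-based detector would miss such a burst unless a matching rule existed; the anomaly detector flags it precisely because it deviates from the learned baseline.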


In Unicorn, we capitalize on significant know-how on deploying effective intrusion detection systems (examining
packet level network traffic at user level, or within the hypervisor, or in the form of a honeypot). We advance
the state-of-the-art by leveraging Unicorn's security-by-design aspects in the configuration of the security and
privacy enforcing mechanisms, allowing Cloud Application Developers to enforce security requirements
and policies through code annotations or appropriate model specifications, to be realized through
corresponding enforcement libraries. Pre-existing IDS-related annotations will be made available to Cloud
Application Developers, enabling them to select one of predefined IDS types and configurations (e.g. IDS with
different rule sets). The Unicorn privacy and security enforcement mechanisms will be deployed and
appropriately customized as part of the Multi-Cloud Execution Environment during the deployment phase of
an application, and in the context of a bundle of security services running in the background (security agent).
Security incidents detected by the enforcement mechanisms will be treated as high-priority events relayed
through monitoring and notification channels. Responding automatically to such events, such as by rapidly
increasing service capacity to avoid deterioration for legitimate users, will also be possible in Unicorn. A
regression-based approach [79] will be used to estimate the strength of an attack so as to more accurately
provision service capacity.
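The regression-based idea can be sketched with plain least squares: fit a line to the request-rate ramp observed during a flooding attack and use its slope to provision capacity ahead of demand. All traffic figures and the per-replica capacity are illustrative assumptions, not values from [79].

```python
# Least-squares slope of observed request rates; the slope estimates how
# fast the attack is ramping, i.e. its strength.
def fit_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

seconds = [0, 1, 2, 3, 4, 5]
requests_per_s = [200, 1250, 2230, 3310, 4240, 5260]  # ramping flood
slope = fit_slope(seconds, requests_per_s)

# Extrapolate 10 s ahead and size capacity accordingly (illustrative figure).
capacity_per_replica = 1000.0  # assumed requests/s one replica can absorb
replicas_needed_in_10s = (requests_per_s[-1] + slope * 10) / capacity_per_replica
print(round(slope), round(replicas_needed_in_10s))
```

Acting on the extrapolated demand, rather than the current one, is what lets the system stay ahead of an attack instead of reacting after legitimate users are already affected.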

Network intrusion detection and prevention are resource-intensive processes. They typically rely on a set of
rules compared against network packets, where most of the time, each byte of every packet needs to be
processed as part of the string searching algorithm looking for matches among a large set of strings from all
signatures that apply for a particular packet. If a string contained in a packet payload satisfies all the conditions
of one of the rules, an associated action is taken. This string comparison is a highly computationally-intensive
process, accounting for about 75% of the total CPU processing time of modern network IDSs [80], [81]. Unicorn
will further advance the state-of-the-art by using Graphics Processing Units (GPUs) as a means of accelerating
the inspection process in its IDS [82]. The use of GPUs in security applications has been explored in the areas
of cryptography [83], data carving [84] and intrusion detection [81], [84]. The authors of [82] managed to boost
the processing throughput of the Snort IDS tool by a factor of three by offloading the string-matching
operations to the GPU. Other attempts to use specialized hardware to accelerate the inspection process of intrusion
detection systems, including content-addressable memory (CAM) [85], [86] or specialized hardware circuits
implemented on FPGAs [87], either have a high cost or involve a difficult and time-consuming procedure.
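The string-matching core that dominates IDS CPU time can be sketched naively as follows. The signatures and rule names are invented for illustration; production systems such as Snort compile all signatures into a single automaton (e.g. Aho-Corasick), or offload the scan to the GPU as described above, precisely because this per-signature scan over every payload byte is so expensive.

```python
# Toy signature table: byte patterns mapped to hypothetical rule names.
SIGNATURES = {
    b"' OR 1=1": "sql-injection-attempt",
    b"/etc/passwd": "path-traversal-attempt",
}

def inspect_packet(payload: bytes):
    """Scan one packet payload against every applicable signature; each
    byte may be examined once per signature, which is why this step
    accounts for most of an IDS's processing time."""
    alerts = []
    for pattern, rule_name in SIGNATURES.items():
        if pattern in payload:
            alerts.append(rule_name)
    return alerts

print(inspect_packet(b"GET /index.php?id=' OR 1=1 HTTP/1.1"))
```

With thousands of signatures and line-rate traffic, this inner loop is the natural candidate for the GPU offloading that Unicorn builds upon.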

2.3.3 Risk and Vulnerability Assessment


Security mechanisms that are applied before the deployment of an application cannot predict the full extent of
threats that the application will face at runtime, once it is installed and instantiated on a cloud environment.
An application may be exposed to security threats due to source code vulnerabilities found in third-party
processes, libraries, or images, which are beyond the control of the application. This is why a trusted
cloud platform should be continuously monitored, and the risks and security threats facing applications
assessed at all times.

Unicorn will enforce risk estimation based on the characteristics of the cloud infrastructure, the type of the
application and the degree of required protection through a dedicated vulnerability assessment component,
and will produce associated reports presented visually through the appropriate IDE view.

The aforementioned Unicorn vulnerability assessment component will utilize a toolset for network exploration
and security auditing, which will serve as a security scanner to check whether the deployed application conforms to
the security policies selected by the Cloud Application Developer through the use of annotations. This toolset
will be able to identify what services are running in a deployment environment, to "fingerprint" the operating
system and all the running applications, and to perform an inventory of services and activities on the local
network. This information is crucial, since any deviation from the enforced security policies indicates a new
security threat that should be reported and acted upon.

One of the currently used approaches is the Security Vulnerability Assessment (SVA) process [88] based on the
Nessus, Nmap and Nikto tools. The approach of [89] uses Nessus 5 and the Common Vulnerability Scoring System
(CVSS) to rate vulnerabilities on a scale ranging from Info to Critical. Nessus is also used in [90] to discover known
vulnerabilities in Amazon Machine Images (AMI). Work in [91] on the Online Penetration Suite (OPS) features
pre-rollout scans of virtual machines for security vulnerabilities using established techniques and prevents the
execution of compromised virtual machines; Nessus is a component of the OPS architecture. Finally, a comparison
of Nessus and Metasploit for determining vulnerabilities of virtual machines with three different operating
systems (Windows, Fedora, Ubuntu) is presented in [92].

In Unicorn, we are advancing the state-of-the-art by contributing a configurable vulnerability assessment
toolset leveraging the security and privacy-by-design aspects of Unicorn. In particular, a vulnerability
assessment toolset will be able to compare the probed/scanned view of the deployed resources to the
modelled information (descriptions of used resources) specified by trusted users (Cloud Application
Developers) and detect deviations in the modelled vs. deployed views. When the status of the infrastructure
is deemed to be at a higher vulnerability level, users will be notified via the appropriate IDE view. Additionally,
Cloud Application Developers will be able to select from a set of vulnerability-assessment-related annotations
at design time to pass specific configuration information (such as prioritization in the order and intensity of
vulnerability scans) to the vulnerability assessment toolset according to their needs and application policies.
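A minimal sketch of this modelled-vs-deployed comparison is given below, with hypothetical data structures; the real toolset will consume the Unicorn model and scanner output (e.g., an Nmap service inventory) rather than plain dictionaries:

```python
# Hypothetical sketch of the modelled-vs-deployed comparison: the modelled
# view comes from the trusted deployment description, the scanned view from
# a network/security scanner. Any service that is running but not modelled
# (or modelled but not running) is reported as a deviation.

def find_deviations(modelled: dict, scanned: dict) -> list[str]:
    """Compare per-host sets of expected vs observed services."""
    reports = []
    for host in sorted(set(modelled) | set(scanned)):
        expected = set(modelled.get(host, []))
        observed = set(scanned.get(host, []))
        for svc in sorted(observed - expected):
            reports.append(f"{host}: unexpected service {svc}")
        for svc in sorted(expected - observed):
            reports.append(f"{host}: modelled service {svc} not running")
    return reports

modelled = {"10.0.0.5": ["tcp/443 https"]}
scanned  = {"10.0.0.5": ["tcp/443 https", "tcp/22 ssh"]}
print(find_deviations(modelled, scanned))
# -> ['10.0.0.5: unexpected service tcp/22 ssh']
```

Each reported deviation would then raise the assessed vulnerability level and trigger the IDE notification described above.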

2.4 Containerization and Cluster Orchestration


Containerization is a virtualization method for deploying and running distributed applications without the need
to launch entire virtual machines (VMs) on the host operating system. Containers effectively partition the
resources managed by a single operating system into isolated groups, to better balance the conflicting demands
on resource usage between those groups. The differences from other virtualization technologies have been
presented in D1.1 [1]. The most important difference, however, is that the containerization paradigm allows the
creation of virtual instances that share the same host operating system and relevant binaries, dependencies
and/or (virtual) drivers, while application containers hold components such as files, environment variables
and libraries required to run the desired software.


Figure 3: Container-based Virtualization

Because containers do not carry the overhead of the entire guest operating system that VMs require, they are
smaller than VMs, easier to migrate, faster to boot and less memory-hungry; as a result, many more containers
than VMs can run on the same infrastructure [93]. Containers can run instructions native to the core CPU without
any special interpretation mechanisms. In turn, application development with containers is a perfect fit for a
micro-service approach, as under this model complex applications are split into discrete and modular units
where, e.g., a database backend might run in one container while the front-end runs in a separate one. Hence,
containers reduce the complexity of managing and updating the application, because a problem or change
related to one part of the application does not require an overhaul of the application as a whole [94]. This
separation of units also greatly improves the security of a complex application, as security vulnerabilities
affecting one component do not directly affect the application as a whole. The savings realized by sharing these
resources, while also providing isolation, mean that containers have significantly lower overhead than true
virtualization.

Examples of container technologies on Linux include Linux-VServer [95], OpenVZ [96] and FreeVPS, while on non-
Linux operating systems Solaris Zones [97] and BSD jails [98] are examples of containers. While each of these
technologies has matured, these solutions have not made significant strides towards integrating their container
support into the mainstream Linux kernel [99]. This has happened with Linux Containers (LXC), a technology
that utilizes the Linux kernel mechanisms (cgroups, namespaces). LXC is a set of tools, templates, library and
language bindings covering the containment features supported by the upstream kernel and, as a result, it
provides operating-system-level virtualization through a virtual environment that has its own process and
network space. For this reason, modern Linux containers are often considered as something in the middle
between a chroot and a full-fledged virtual machine.

The most popular container platform is Docker Engine [3], which is built on top of Linux kernel mechanisms and
constructs such as namespaces, cgroups, chroot and the file system. Docker originally chose LXC as its engine but
has since developed its own solution called libcontainer [100], as shown in Figure 4. Libcontainer enables
containers to work with Linux namespaces, control groups, capabilities, security profiles, network interfaces and
firewalling rules in a consistent and predictable way, without relying on Linux userspace components.


Figure 4: Usage of Linux containerization toolkit by Docker

Docker provides a complete toolset that gives the ability to package and run containerized applications
(Container Engine) and to manage the lifecycle of containers. Other examples of container engines are CoreOS
rkt (Rocket) [101], LXD [102] and Cloud Foundry's Garden/Warden [103]. Although Docker is practically leading
the market [104], rkt has gained popularity due to its tight integration with CoreOS Container Linux. In Unicorn,
there is interest in supporting both Docker and rkt.

With container engines like Docker, all containers on a given host run under the same kernel, with only
application resources isolated per container. While this isolation improves security, the host OS daemon
managing co-located containers remains one of the critical points, as it constitutes an attack surface for exposed
vulnerabilities [105].

To improve isolation by providing secure containerization while still adhering to Linux kernel principles, CoreOS
Container Linux, depicted in Figure 5, was designed to alleviate many of the flaws inherent in
Docker's container model [105]. In particular, CoreOS is a lightweight Linux operating system designed for
clustered deployments, providing automation, security and scalability. It features a read-only Linux rootfs,
with only /etc being writable, which allows containers, even co-located ones, to remain isolated; to reach each
other, communication is handled over the IP network, while network configurations are exchanged over etcd.

Figure 5: CoreOS Host and Relation to Docker Containers


For the deployment and orchestration of containers, frameworks such as Docker Swarm [106], Kubernetes [5]
or even Apache Mesos [107] can be used to manage container-based clusters. Apache Mesos
does not support the provisioning of containerized resources, but Docker Swarm and Kubernetes allow the
definition of the initial container deployment as well as the management of multiple containers as one entity,
for purposes of availability, scaling and networking. In Unicorn, we are interested in the definition of service
graphs that represent applications following the micro-service paradigm, and in the subsequent provisioning of
the micro-services in a pool of available resources. This provisioning process, however, cannot be performed
using Docker Swarm or Kubernetes alone, as these frameworks are unaware of the available resource pool. For
this reason, Unicorn will use the Arcadia Orchestrator [6], a tool that supports the provisioning, configuration
and management of containerized applications across multiple sites. On top of the cluster management
capabilities that the aforementioned tools offer, Arcadia is aware of the resource pool and also provides the
notion of a service graph, which is compliant with the Unicorn vision of deploying complex applications as
connected micro-services. For Unicorn, the Arcadia Orchestrator will be extended to support open cloud
topology description (TOSCA) deployments to supported cloud providers, with adaptors developed to also
support providers such as AWS, Google Compute Engine and Docker. Finally, support for one of the latest trends
in virtualization technology, unikernels, will also be provided by utilizing and extending the Arcadia Orchestrator.
Unikernels are specialized single-purpose images that disentangle applications from the underlying operating
system, as OS functionality is decomposed into modular and pluggable libraries that developers select from a
modular stack.


3 Unicorn System Requirements and User Role Overview


The purpose of this section is to provide a brief overview of the identified user roles and the system
requirements. The requirements analysis phase was documented in the Unicorn Project
Deliverable D1.1 Stakeholders Requirements Analysis [1]. Functional requirements affect the definition of the
architecture as well as the reference implementation. All types of requirements have been formulated based on
the needs of end users and have been mapped to the identified user roles.

As a first step of the requirements collection procedure, all Unicorn roles have been identified and are depicted
in Figure 6. The identified user roles include:

a) Cloud Application Owner that is the person providing the vision for the application as a project,
gathering and prioritizing user requirements and overseeing the business aspects of deployed
applications.
b) DevOps Team, which consists of the following team members:
a. Cloud Application Product Manager that defines the cloud application architecture and
implementation plan based on the Cloud Application Owner's requirements and is also
responsible for packaging the cloud application and enriching the deployment assembly with
runtime enforcement policies.
b. Cloud Application Developer that develops a cloud application by using the Unicorn-compliant
code annotation libraries.
c. Cloud Application Administrator that is responsible for deploying and managing the lifecycle of
developed and Unicorn-compliant cloud applications. This person ensures the application runs
reliably and efficiently while respecting the defined business or other incentives in the form of
policies and constraints.
d. Cloud Application Tester who is responsible for the quality assurance and testing of a Cloud
Application.
c) Unicorn Administrator who is the person responsible for managing and maintaining the Unicorn
ecosystem, which includes infrastructure and various software and architectural components.
d) Unicorn Developer that creates Unicorn-related (software) components for compliant Cloud Providers
and/or DevOps Engineers.
e) Cloud Provider that provides cloud offerings in the form of programmable infrastructure according to a
service-level agreement. The Cloud Provider is also responsible for operating the Cloud Execution
Environments that will host, entirely or partially, Unicorn-compliant Cloud Applications.
f) Cloud Application End User who is the person using the deployed Unicorn-compliant cloud application.


Figure 6: Identified Unicorn Actors

Each of these identified roles imposes several technical requirements. Some of these requirements may overlap
among users of the platform, which, at first, may seem to lead to a misleading interpretation of user role duties.
Finally, we note that some of the user roles presented may not be assigned to any functional requirements (e.g.,
Cloud Application End User); however, their existence contributes to a more complete description of the
overall system.

Table 1 presents an overview of the identified system functional requirements and their relation to Unicorn
user roles. The identification process for the Unicorn functional requirements involved active contribution by all
partners and included an interview process with potential stakeholders in the SME and start-up ecosystem
as well as extensive research and analysis of the market state.

Table 1: Mapping of functional requirements to user roles

FR.1 Develop cloud application based on code annotation design libraries and define runtime policies and constraints (Cloud Application Developer)
FR.2 Securely register and manage cloud provider credentials (Cloud Application Product Manager, Cloud Application Admin, Unicorn Developer)
FR.3 Search interface for extracting underlying programmable cloud offerings and capability metadata descriptions (Cloud Application Product Manager)
FR.4 Creation of Unicorn-compliant cloud application deployment assembly (Cloud Application Product Manager)
FR.5 Cloud application deployment bootstrapping to a (multi-)cloud execution environment (Cloud Application Admin, Cloud Provider, Unicorn Developer)
FR.6 Deployment assembly integrity validation (Cloud Application Tester, Unicorn Developer)
FR.7 Access application behavior and performance monitoring data (Cloud Application Admin)
FR.8 Real-time notification and alerting of security incidents and QoS guarantees (Cloud Application Admin)
FR.9 Autonomic management of deployed cloud applications and real-time adaptation based on intelligent decision-making mechanisms (Cloud Application Admin, Cloud Provider)
FR.10 Manage the runtime lifecycle of a deployed cloud application (Cloud Application Admin, Unicorn Developer)
FR.11 Application placement over programmable cloud execution environments (Cloud Application Developer, Cloud Application Product Manager, Cloud Application Admin, Unicorn Developer)
FR.12 Register and manage cloud application owners (Unicorn Admin)
FR.13 Manage the core context model (Unicorn Admin)
FR.14 Register and manage enablers interpreting Unicorn code annotations (Unicorn Admin)
FR.15 Unified API providing abstraction of resources and capabilities of underlying programmable cloud execution environments (Cloud Application Product Manager, Unicorn Developer)
FR.16 Resource and service (de-)reservation over multi-cloud execution environments (Unicorn Developer)
FR.17 Development of code annotation libraries (Unicorn Developer)
FR.18 Development of enablers interpreting Unicorn code annotations (Unicorn Developer)
FR.19 Register and manage programmable infrastructure and service offerings (Cloud Provider)
FR.20 Monitor cloud offering allocation and consumption (Cloud Provider)
FR.21 QoS advertising and management (Cloud Provider)
FR.22 Register and manage privacy-preserving encrypted persistency mechanisms for restricting data access and movement across cloud sites and availability zones (Cloud Application Developer, Cloud Application Admin, Unicorn Developer)
FR.23 Register and manage persistent security enforcement mechanisms for runtime monitoring, detecting and labeling of abnormal and intrusive cloud network traffic behavior (Cloud Application Admin, Cloud Provider)
FR.24 Automated application source code and underlying cloud resource offering vulnerability assessment, measurement and policy compliance evaluation (Cloud Application Admin, Cloud Provider)

After capturing and harmonizing the functional requirements, a dedicated process aimed at identifying
non-functional requirements was initiated. Non-functional aspects relate to the desired quality
attributes that should be satisfied by the architectural components fulfilling the functional requirements.
To this end, the specific quality aspects imposed by ISO/IEC 25010 [108] have been elaborated. Since the
scope of these aspects is quite broad, a filtering process selected the non-functional characteristics that should
be satisfied by the developed solution; these are presented in Figure 7.

Figure 7: Non-functional requirements


4 Unicorn Reference Architecture


This section provides an in-depth architectural overview of the Unicorn framework through a series of views
and flows analysing and explaining the expected behaviour of the components comprising Unicorn. This
section will also help readers comprehend how Unicorn, as a framework, addresses the functional
and non-functional requirements described in Unicorn D1.1 [1]. Figure 8 depicts a coarse-grained view of the
Unicorn components, which are logically grouped into three distinct layers: namely, i) the Unicorn
Cloud IDE Plugin layer, ii) the Unicorn Platform layer and iii) the Multi-Cloud Execution Environment layer.

Figure 8: Unicorn Reference Architecture


The Unicorn framework design is oriented towards supporting cloud application developers and administrators
in creating, deploying and managing cloud application deployments more efficiently. To this end, the Unicorn
framework automates many tasks such as code annotation validation, application deployment and application
monitoring. Moreover, the Unicorn framework is responsible for managing and orchestrating services, validating
the connections between them and generating separately deployable artefacts (bundled as Docker containers)
for each service. The development of cloud applications following the micro-service architectural pattern is
performed using the Unicorn cloud-based IDE plugin. Through this environment, developers can use specific
code annotations, bundled in Unicorn's application design libraries, that are validated by the Unicorn Platform
and automatically interpreted by the respective Unicorn orchestrated services at both compile and run
time.

Figure 9: Eclipse CHE High-Level Architecture

The Unicorn IDE plugin is developed for the popular, open-source cloud IDE Eclipse Che [109], developed
and maintained by the Eclipse Foundation community. Eclipse Che provides application developers
with the ability to develop micro-services following the Bring-Your-Own-Device (BYOD) paradigm, thus without
the need to configure software development tools locally. As shown in the Unicorn ICT SME/Startup survey,
conducted for the requirements analysis of D1.1 [1], Eclipse Che is currently the most popular cloud IDE
in the EU-based ICT startup ecosystem. Eclipse Che is a general-purpose software development environment.
To use Eclipse Che, as depicted in Figure 9, a developer via a web browser downloads the IDE as a single web
application page from the Che server, which may be publicly serviced by the Eclipse community or be a
(customized) privately hosted Che Server deployment. The web application provides UI components such as
wizards, panels, editors, menus, toolbars, and dialog boxes. As users interact with the Web application, they
create workspaces, projects, environments, machines, and other artifacts necessary to code and debug a
project.

The IDE communicates with Eclipse Che over a set of RESTful APIs that manage and interact with a Workspace
Master that controls the workspaces. Each workspace is securely isolated from other workspaces and is
responsible for managing the lifecycle of the components that are contained within it. A workspace contains
one or more runtimes (machines). A machine is part of a Che workspace and is created from a runtime stack. A
machine is defined by a recipe that contains the list of software that should be executed within the machine.
The default runtimes within Che workspaces are Docker containers. The advantage of using Docker as the runtime
type is that it allows users to customize their runtimes with Dockerfiles [110] that are text documents containing
all the commands a user could call on the command line to assemble a Docker image. Eclipse Che injects various
services into each workspace such as Che plug-ins, SSH daemon, and language services such as JDT core
Intellisense to provide refactoring for Java language projects. Workspaces also contain a synchronizer which
is responsible for synchronizing project files from within the machine with Che long term storage. Since the
workspaces are servers that have their own runtimes, they are collaborative, shareable and portable. This
permits collaborative software development, allowing multiple users and teams to instantly access the same
workspace code base and runtime tools, something highly prioritized by the Unicorn project. Eclipse Che is based
on Docker [3], GWT [111], Orion [112] and RESTful APIs and offers both client- and server-side capabilities. It also
provides an SDK for authoring new extensions, packaging extensions into plug-ins, and grouping plug-ins into an
assembly. An assembly can either be executed standalone as a new server or be installed onto desktops
as an application using the included installers.
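As noted above, a workspace machine recipe is in practice an ordinary Dockerfile. A minimal, purely illustrative recipe for a Java micro-service runtime might look as follows (the base image name and installed tooling are assumptions, not a Unicorn-mandated stack):

```dockerfile
# Hypothetical Che machine recipe: an ordinary Dockerfile that assembles
# the runtime image for a Java micro-service workspace.
FROM eclipse/ubuntu_jdk8
# add the build tooling the micro-service project needs
RUN apt-get update && apt-get install -y --no-install-recommends maven \
    && rm -rf /var/lib/apt/lists/*
# port on which the developed service will listen inside the machine
EXPOSE 8080
# keep the machine process alive so the workspace stays running
CMD ["tail", "-f", "/dev/null"]
```

The Unicorn Runtime Stack described below packages recipes of this kind so that every Unicorn workspace starts with the required tooling preinstalled.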

To develop the Unicorn Cloud IDE plugin, we will make use of the Eclipse Che SDK, described above, to develop
an IDE plugin and extend the Che Server and Workspace Management. Thus, at the IDE level we will author
extensions, written in Java and translated into a JavaScript Web application, that modify the look and feel, menus,
toolbars and panels of the client when accessing the plugin utilities, to provide an Eclipse look-and-feel
tailored to cloud micro-service design and development. Figure 10 depicts a high-level and abstract overview of
Eclipse Che extended with Unicorn functionality and components within the plugin eco-system. The Unicorn
Cloud IDE plugin mainly focuses on the development, management, deployment and validation of Unicorn-
compliant micro-service cloud applications. Since the Unicorn Cloud IDE plugin is developed as a plugin for the
Eclipse Che platform, it operates in a similar manner as Che. This means that it is operable through a web
browser that downloads the Unicorn IDE along with the Design Libraries. The IDE is organized into two
perspectives (facets), each addressing a specific set of functionalities for different types of Unicorn user roles.

The Development Perspective allows micro-service developers, via the Annotated Source Code Editor, to develop
secure, elastic and privacy-aware cloud applications using the annotative Design Libraries. In the same
perspective, using the Service Graph and Policies Editor, the application product manager edits the
service graph of the application to be deployed and defines and modifies the policies and constraints that will
govern its operation. In the Management Perspective, application administrators have the ability to view, in an
intuitive graphical manner, collected metrics and potential security threat incidents in the Monitoring and
Security Dashboard, to manage the lifecycle of a deployed application through the Application Lifecycle
Management view, and to manage users and cloud provider tokens and credentials in the respective management
views. In addition, the Unicorn plugin provides developers with the capability to search and package (with
assistance from the Unicorn Platform) their cloud applications with OS libraries for deployment to multi-cloud
execution environments, a tool currently absent, especially, from the fast-evolving container and unikernel community.

At the Server level, the Unicorn Developer extends the Eclipse Che API to be able to handle and manage Unicorn
workspaces and runtimes. Specifically, through the IDE, the various user roles are able to invoke RESTful API
calls which allow components running in the browser IDE to communicate with the Workspace Master in the server,
and the server with the workspace. Such communication includes injecting annotated source code, policies and
service graph instances from the Development Perspective of the IDE into the runtime within the workspace via
the workspace master. Another important component at the Server level is the Model Translator. For this
component Unicorn reuses and extends the comprehensive cloud Application Model of the Eclipse Cloud
Application Management Framework (CAMF), work performed within the context of the EU FP7 co-funded
research project CELAR [17]. CAMF is an official plugin released under Eclipse Foundation [113] that provides
extensible graphical cloud model specification tools over the Eclipse Rich Client Platform (RCP) [114] to design
and describe cloud application topologies. In particular, the CAMF Application Modeler translates graphical
cloud application topology descriptions into OASIS TOSCA files specifying the topology of cloud applications,
along with their management operations. Similarly, Unicorn's Model Translator translates the graphical
representation of the cloud application topology from the IDE into a TOSCA model description stored within the
workspace.

The Unicorn Developer also develops the Unicorn Runtime Stack, a workspace configuration containing the
runtime recipes (in practice, Dockerfiles) that set up the Unicorn environment within the workspace to support
the aforementioned concepts.

Figure 10: Unicorn Eclipse CHE Plugin Overview

At the Unicorn Platform layer, the components are further grouped into sublayers based on the functionality
they serve, as shown in Figure 8. The Meta-Model Validation sublayer is responsible for the interpretation of
annotations and the validation of the provided service graph and policies. At the Compile, Bundling and
Deployment Enforcement sublayer, all deployment-time policies are enforced by instantiating the necessary
Unicorn Agents within the containers, while the Run-time Enforcement sublayer is responsible for enforcing
policies at runtime. The Cloud Orchestration sublayer, through a Unified API, realizes in a high-level, uniform way
all necessary interactions between the Unicorn Platform and the various cloud providers and sets up the
necessary virtual networks.

Figure 11 shows a high-level overview of the orchestration process from the infrastructure to the application
layer, along with the key technologies used in each layer. Starting from the infrastructure layer, Unicorn,
through the Unified API, provides a transparent way of accessing infrastructure resources from multiple cloud
providers. On the operating system layer, Unicorn capitalizes on the open-source CoreOS [4], a lightweight
library-based operating system embracing the unikernel paradigm and specifically tailored to support
containerized deployments. At the OS level, Unicorn also supports Ubuntu Core [115], another lightweight
library-based OS designed to securely run containerized applications. Based on the findings of our survey
regarding the containerized environment [1], Unicorn makes use of the Docker Engine [10], a lightweight and
powerful containerization technology, to run containers. However, the container orchestration of Docker Engine,
while sufficient for small deployments, is limited to a single-host environment (e.g., one machine). In order to
support the orchestration of large-scale distributed containerized deployments spanning multiple hosts,
Unicorn makes use of Kubernetes [5], an open-source orchestration tool for containers running on a cluster of
virtual machines. This tool provides the ability to automatically provision and de-provision containerized
applications running on a single cluster, but it has two limitations that are important for our purposes. First, it
does not support the (de-)provisioning of infrastructure resources; second, each Kubernetes cluster
is a relatively self-contained unit, typically running in a single data centre or a single availability zone of a cloud
provider, making it impossible to operate an application over multiple cloud zones/regions and providers.
To overcome these limitations, the Arcadia Orchestrator is used and extended to support the orchestration of
infrastructure resources from multiple cloud providers and to build, on top of Kubernetes, a network overlay
across clouds that supports transparent communication among the services of an application residing in different clouds.

Figure 11: High-Level Unicorn Orchestration
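To ground the discussion, the unit that Kubernetes manages within one cluster can be expressed as a Deployment manifest. The sketch below is illustrative (the service name and image are hypothetical); cross-cluster placement of such manifests is precisely what the Arcadia Orchestrator adds on top:

```yaml
# Minimal, hypothetical Kubernetes Deployment: keeps two replicas of one
# containerized micro-service running inside a single cluster. Kubernetes
# maintains the replica count, but it neither provisions the underlying
# VMs nor spans multiple clusters.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-service            # illustrative micro-service name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.org/orders:1.0   # illustrative image
          ports:
            - containerPort: 8080
```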

The cornerstone of the Unicorn framework is the Unicorn Core Context Model, hereafter referred to as the Unicorn
Context Model, which provides semantic clarity and defines all needed entities. The Unicorn Context Model is a
concrete and detailed model that describes the application, the resources and the policies and, in general, provides
semantic clarity within the project.

The Unicorn model reuses work done in other research and standardization activities, mainly the Arcadia
Component Metamodel, which is one of the foundations of the Unicorn Context Model, the open cloud topology
specifications of OASIS TOSCA [8] and the PaaSword Policy [116] and Context-Awareness Models [74].
Information gathered through our research on related solutions, guidelines, best practices and principles has
also been taken into account for the creation and enhancement of the actual model.


The starting point for describing any vendor-neutral, topology-aware cloud application is that each application
has a service graph that includes all the micro-services composing the actual application. The service graph is
in fact a Directed Acyclic Graph, and the micro-services used are defined as Nodes, with a node being the
minimum part of the service graph. The nodes of a service graph have dependencies and connections among
them, and these connections can be implemented using TOSCA's Topology Template, as illustrated in Figure 12.

Figure 12: TOSCA Topology Template

For Unicorn-compliant cloud applications, the notion of service graphs is similar to the VNF Forwarding Graphs
of TOSCA NFV, and software components are similar to VNFs in the TOSCA NFV specification [117]. The Arcadia
Context Model used in Unicorn has already moved in this direction: the nodes and edges of Unicorn service
graphs are mapped to the underlying service-graph specification language, expressed in XML. By being compatible
with this specification, TOSCA NFV deployment scripts can be mapped to Unicorn deployment scripts in which the
software components that constitute the service graph correspond to a set of VNFs.
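To make the node/edge mapping concrete, the sketch below parses a minimal TOSCA-style topology fragment with the Python standard library and extracts the service graph's nodes and edges. The element and type names are simplified illustrations, not the full namespaced OASIS TOSCA schema:

```python
import xml.etree.ElementTree as ET

# Minimal TOSCA-style topology template (element names simplified for
# illustration; the real OASIS TOSCA schema is richer and namespaced).
TOPOLOGY = """
<TopologyTemplate>
  <NodeTemplate id="front-end" type="unicorn.nodes.WebService"/>
  <NodeTemplate id="streaming" type="unicorn.nodes.Microservice"/>
  <RelationshipTemplate id="fe-to-stream" type="tosca.relationships.ConnectsTo">
    <SourceElement ref="front-end"/>
    <TargetElement ref="streaming"/>
  </RelationshipTemplate>
</TopologyTemplate>
"""

root = ET.fromstring(TOPOLOGY)
# NodeTemplates become graph nodes; RelationshipTemplates become edges.
nodes = [n.get("id") for n in root.findall("NodeTemplate")]
edges = [(r.find("SourceElement").get("ref"), r.find("TargetElement").get("ref"))
         for r in root.findall("RelationshipTemplate")]
print(nodes)  # ['front-end', 'streaming']
print(edges)  # [('front-end', 'streaming')]
```

The extracted `(nodes, edges)` pair is exactly the graph structure the deployment machinery operates on.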

The model also conceptualizes all the parametric information needed for the definition of the service
graph, covering, where possible, challenges that relate to the development and operation of applications built with
the micro-service paradigm in reconfigurable environments. The Unicorn Context Model is used in all service
lifecycle phases (i.e. development, composition, deployment planning, execution) in order to conceptualize the
specific aspects of services that are required by the Unicorn architectural components. For this reason, the Unicorn
Context Model is a multi-faceted and multi-purpose model, with modeling artifacts conceptually grouped into
facets based on the purpose that they support. The facets of the Unicorn Context Model are the following:

1. Unicorn Component Model, which is used to conceptualize the component in Unicorn as the most
granular executable unit that can be hosted in an execution container. It is based mostly on the Arcadia
Component Model, but will be extended in order to support the Unicorn micro-service definition
provided in Section 3 [9].
2. Unicorn Service Graph Model, which is used to conceptualize complex applications as Directed Acyclic
Graphs containing multiple components. The Arcadia Service Graph Model should be sufficient, but it can
be enhanced if needed.
3. Unicorn Cloud Resource and Service Deployment Model, which is used for the creation of the
application placement configuration used to deploy the Service Graph to the cloud
infrastructure. This facet of the model is extended by Unicorn to include the deployment
policies, and is updated to represent the IaaS resources and the containerized environments
at the level of detail needed by Unicorn.
4. Unicorn Service Runtime and Policies Model, which is used to conceptualize the states of the
application in Unicorn. It is extended to include the runtime policies as well. This facet also
includes monitoring metrics.
5. Unicorn Annotations Model, which is the facet of the Context Model responsible for the
conceptualization of the code-level annotations that developers can use during the
implementation of components. The annotations used in Unicorn, and the model describing them, are
an aggregation of the Arcadia and PaaSword annotations. These annotations are interpreted by the
Annotations Interpreter component.
6. Unicorn Privacy Policy Model, which is based on the PaaSword Policy and Context-Awareness Models
in order to allow Unicorn to provide context-aware security in the deployed applications. Although
these models are very detailed, minor changes might be needed in order to support the Unicorn
scenarios.

The Unicorn Context Model uses the XML Schema Definition (XSD) language recommendation to formally describe the
elements of an Extensible Markup Language (XML) document. The normative model to be created describes the
component model and is used for the creation of applications that can be added to the Unicorn platform.
Furthermore, it should be clarified that, as the Unicorn Context Model is a normative model, it allows the strict
validation of all model instances produced from it. The fully documented XSD model is
under development and will be documented in upcoming deliverables, based on the purpose of each facet of
the model.


Figure 13: Unicorn Core Context Model Mapping

4.1 Motivational Example


The following subsections present in detail the responsibilities and main tasks of each component
and the technologies involved in its implementation. Interactions between the components, as well as the
flows of information and messages exchanged, are also demonstrated for the following coupled Unicorn
scenarios: i) Application Development and Validation, ii) Application Deployment, iii) Monitoring & Elasticity,
iv) Security & Privacy Enforcement.

To better illustrate the concepts presented in the following subsections, the use-case of a content streaming
application depicted in Figure 14, i.e. a simplified view of Netflix [118] or Spotify [119], is presented within the
context of Unicorn. From an architectural point of view, the application consists of distinct services, each one
achieving a specific task, deployed across multiple availability zones/regions. More specifically, the services
provided by the application include: i) a user-management service responsible for the authentication,
authorization and management of users, ii) a search service responsible for browsing through the catalogues
of the existing content, iii) a streaming service that transmits chunks of content from the source to the end
user, and iv) a web-based front-end service that provides the UI through which the user accesses the previously
mentioned services.


Figure 14: Content Streaming Cloud Application

Using Unicorn, the application developers would use the Unicorn Cloud IDE plugin and, within its Development
perspective, write the source code of the application enriched with annotations provided by the
Unicorn Annotation Design Libraries. In the Service Graph and Policies Editor of the Deployment Perspective,
the product manager would define the policies (privacy, runtime and deployment-time) that govern the entire
application during its whole lifecycle. Table 2 below summarizes some of the policies expressed via annotations
or through the service graph and policy editor.

Table 2: Policies for Content Streaming Application

Policy Rule | From Design Library | Description
Keep streaming service latency below 300ms | Elasticity & Monitoring | This rule guarantees the desired QoS for streaming content
Monitor how much time is needed by the search service to return a catalog to the end user | Monitoring | This policy captures the time spent by the search service
Restrict streaming content from locations in Germany to other locations | Privacy | This rule restricts the movement of data from Germany to the outside world
Encrypt streaming content using AES-128 algorithm | Security | This policy provides an encryption mechanism to secure streaming content
Set Operational budget = $100 / hour | Elasticity | This rule sets an upper limit on how much may be spent on cloud resources

4.2 Cloud Application Development and Validation


Cloud application development, within the context of Unicorn, is realized using the Application Design
Libraries, which provide Cloud Application Developers with the means to create secure-by-design, elastic-by-design
and private-by-design applications, as illustrated in Figure 15. The Application Design Libraries can be envisioned as
a set of Java libraries that provide the programmer with a well-known set of useful facilities. The Design Libraries are
part of the Unicorn Plugin and are used as annotations, through the Annotated Source Code Editor component
of the Unicorn Cloud IDE Plugin, to annotate parts or the entirety of the source code of a Unicorn-compliant
project.

The Cloud Application Product Manager uses the Service Graph & Policy Editor component to define the design-
time and run-time policies governing the cloud application and to compose and extract the topology of the
application to be deployed. This component offers an intuitive graphical user interface that allows drag-and-drop
actions to compose/edit the service graph, which consists of the application components developed by the
developer and their inter-dependencies and relations, stored in a Component Repository. Under the
hood, the graph's nodes and edges map to the underlying service graph specification language, expressed in
Extensible Markup Language (XML), namely the OASIS TOSCA [8] specification, extended as part of the Unicorn
project to include run-time, privacy and deployment policies.

Then, the Cloud Application Administrator, using the Available Cloud Offerings component of the
Management Perspective sublayer of the Cloud IDE, is able to search through the available cloud offerings and
select, either manually (from a dropdown list with search filters) or automatically (by letting the component solve
a constraint satisfaction problem based on the placement policies), the best-fitting offerings for the deployment of
the application. This component retrieves information about cloud offerings from an administrative Unicorn
database.

The next step of the process is the Product Manager initiating the packaging process. This process is
conducted by the Package Manager component, which receives as input the annotated source code and the
enriched XML description of the service graph, creates the Unicorn deployment artefact and forwards it to
the Unicorn Platform. The Unicorn artefact is a Cloud Service Archive (CSAR) file, which is essentially a zip file
containing at least two directories, the TOSCA-Metadata directory and the Definitions directory. Beyond that,
other directories may be contained in a CSAR. The TOSCA-Metadata directory contains metadata describing the
other content of the CSAR. This metadata is referred to as TOSCA meta file. The Definitions directory contains
one or more TOSCA Definitions documents (file extension .tosca). These Definitions files typically contain
definitions related to the cloud application of the CSAR. In addition, CSARs can contain just the definition of
elements for reuse in other contexts. Package Manager, Annotated Source Code Editor and Service Graph &
Policy Editor are the components that form the Development Perspective of the Cloud IDE Plugin.
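The CSAR layout described above can be sketched as a small packaging routine. This is an illustrative sketch, not the actual Package Manager: the file names follow the TOSCA CSAR conventions just described, while the metadata and definitions contents are placeholders:

```python
import io
import zipfile

def build_csar():
    """Assemble a minimal CSAR-like archive in memory: a TOSCA-Metadata
    directory with a meta file, and a Definitions directory with one
    .tosca definitions document (contents simplified placeholders)."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # TOSCA meta file describing the rest of the archive.
        zf.writestr("TOSCA-Metadata/TOSCA.meta",
                    "TOSCA-Meta-File-Version: 1.0\n"
                    "CSAR-Version: 1.1\n"
                    "Entry-Definitions: Definitions/service-graph.tosca\n")
        # Definitions document for the cloud application's service graph.
        zf.writestr("Definitions/service-graph.tosca",
                    "<Definitions><!-- service graph goes here --></Definitions>\n")
    buf.seek(0)
    return buf

archive = build_csar()
with zipfile.ZipFile(archive) as zf:
    print(sorted(zf.namelist()))
# ['Definitions/service-graph.tosca', 'TOSCA-Metadata/TOSCA.meta']
```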

The Unicorn deployment artefact arrives, through the Unicorn Unified API, at the Unicorn Platform layer and
passes through the Metamodel Validation sublayer. The first component of the sublayer is the Annotations
Interpreter, which is responsible for the introspection of the application archive uploaded from the Unicorn Cloud
IDE Plugin, or of the code edited in it. More specifically, the mechanism introspects
the Java bytecode and determines whether the application contains Unicorn annotations or not. Annotations
in Java are syntactic metadata that can be added to the code, following specific standards of the
Java language (JSR 175 [60], JSR 250 [61], JSR 269 [62] and JSR 308 [120]).
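The Unicorn design libraries target Java; as a language-neutral sketch of the same annotate-then-introspect pattern, the following uses Python decorators to attach policy metadata that an "interpreter" later extracts. All names (`monitor`, `__unicorn__`, the metric attributes) are hypothetical illustrations, not the real library API:

```python
def monitor(metric, threshold_ms):
    """Hypothetical stand-in for a Unicorn design-library annotation:
    attaches policy metadata to the decorated function."""
    def wrap(fn):
        fn.__unicorn__ = {"metric": metric, "threshold_ms": threshold_ms}
        return fn
    return wrap

@monitor(metric="latency", threshold_ms=300)
def stream_chunk(content_id):
    return f"chunk-of-{content_id}"

def introspect(functions):
    """Toy 'annotations interpreter': collect the attached metadata so it
    can be handed to the enforcement components."""
    return {fn.__name__: fn.__unicorn__
            for fn in functions if hasattr(fn, "__unicorn__")}

rules = introspect([stream_chunk])
print(rules)  # {'stream_chunk': {'metric': 'latency', 'threshold_ms': 300}}
```

In Unicorn the equivalent step inspects compiled Java bytecode for JSR-175-style annotations rather than live function objects.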

A parser transforms the arguments of the annotations into the format needed by the Run-Time
Policies Enforcement and the Privacy Enforcement components, and also provides the attributes of the rules.
After the introspection process, the identified annotations are provided to the Unicorn UI for further
configuration, and also to the enforcement components together with the rules and attributes needed for
the evaluation.

Moving forward with the process, validations take place on both the service graph and the user-defined
policies.

For the creation of a successful platform that allows the design, deployment and management of elastic and
multi-cloud services, it is important for Unicorn to define a concrete and detailed model that describes the
application and also provides semantic clarity within the project. The starting point for describing any Unicorn-
compliant application is that each application has a service graph that includes all the micro-services that make up
the actual application. The model also conceptualizes all the parametric information needed for the
definition of the service graph, and uses the XSD [121] language recommendation to formally
describe the elements of an Extensible Markup Language (XML) document. This normative model
describes the component model and is used for the creation of applications that can be added to the Unicorn
platform.

For the definition of the model used by Unicorn applications, work already done in the area is exploited.
The Arcadia Component Metamodel [122] is one of the foundations of the Unicorn Model, while open cloud
topology specifications like OASIS TOSCA are used to enhance the portability, elasticity and security
aspects of the model. The principles and best practices of component design and micro-service deployment
are also taken into consideration and become part of the model.

The Service Graph Validation Module is responsible for determining the well-formedness and validity of the
service graphs that have been defined. Each service graph is described using a human-readable language such as
XML or YAML. The well-formedness and validity of a service graph are evaluated using the XSD schema that
represents the normative presentation of the Unicorn model.

The Policies Validation Module is responsible for determining the well-formedness and validity of the policies that
have been defined using the Service Graph and Policy Editor. The well-formedness of a policy is determined on
the basis of the set of constraints that form the policy, using the XACML [75] policy definition and evaluation model.
The validity of a newly-created (or updated) policy is determined on the basis of its potential relations with
other, already defined, policies. An interface provided by the Policies Validation Module accepts sets of
Unicorn policies and indicates whether they contain contradictions or subsumptions.

Each time a user constructs a new policy, or updates an existing one, the Policy Validation Component intervenes
in order to evaluate whether the newly created or modified policy is valid (i.e. it is not contradictory, it does not
subsume or contradict any existing policies, and it has at least one enabler associated with each one of its
attributes).
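As a hedged sketch of the contradiction/subsumption check (the real module reasons over XACML policies; here each policy is simplified to an allowed numeric interval for one attribute, with names chosen for illustration):

```python
def relation(p, q):
    """Classify two interval policies. Each policy is a tuple
    (attribute, low, high). Returns 'contradiction' when the allowed
    ranges cannot both hold, 'subsumption' when one range contains the
    other, and 'compatible' otherwise."""
    attr_p, lo_p, hi_p = p
    attr_q, lo_q, hi_q = q
    if attr_p != attr_q:
        return "compatible"          # different attributes never clash here
    if hi_p < lo_q or hi_q < lo_p:
        return "contradiction"       # empty intersection of allowed ranges
    if (lo_p <= lo_q and hi_q <= hi_p) or (lo_q <= lo_p and hi_p <= hi_q):
        return "subsumption"         # one interval contains the other
    return "compatible"

existing = ("latency_ms", 0, 300)            # keep latency below 300 ms
print(relation(existing, ("latency_ms", 0, 100)))    # subsumption
print(relation(existing, ("latency_ms", 400, 500)))  # contradiction
print(relation(existing, ("budget_usd", 0, 100)))    # compatible
```

A detected contradiction or subsumption would be reported back to the user instead of silently accepting the new policy.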

It is important to clarify that policies are created as combinations of rules, and that the rules of a policy are
defined at component level. Three different types of policies are distinguished in Unicorn:

1. Deployment policy: a policy that constrains the service graph creation during initial placement. It defines
the resources needed by the application at component level.
2. Runtime policy: a policy that includes the constraints that have to be enforced during runtime and
affects the way the application scales.
3. Privacy policy: a policy that includes authorization constraints for the application and uses the XACML
model.

Upon successful validation, the deployment process continues normally. In case of validation failures or errors,
the process stops and the Cloud Application Administrator receives a notification.

Applying this flow to the content streaming use-case, the Cloud Application Developers of the content streaming
application would use the annotation libraries to define the deployment, runtime and privacy policies shown in
Table 2.

Specifically, the developers could use annotations to:

1. Achieve a high QoS for the streaming service, by monitoring its latency in order to adapt the streaming
service to meet current demand, while staying within budget.
2. Protect the streaming content using an encryption algorithm.
3. Restrict data movement from certain locations, such as restricting content from Germany to the outside
world.

Figure 15: Cloud Application Development and Validation


4.3 Application Deployment


Figure 16 illustrates the Application Deployment flow. Following the flow of Section 4.2, the Cloud Application
Developer has used annotations to develop the cloud application and the Cloud Product Manager has initiated
the deployment process. Provided that no validation failures occur at the Meta-Model Validation sublayer,
the deployment process continues at the Unicorn Platform layer, and more specifically at the Compile, Bundling
and Deployment Enforcement sublayer.

The Deployment Policies Manager component is responsible for the enforcement of the deployment policies.
It works together with the Application Placement Optimization in order to produce the ideal
service graph placement by imposing the deployment policies that describe the service graph. The component
operates by translating the policies into the proper format and feeding them to the constraint satisfaction
solver of the Application Placement Optimization.

The Application Placement Optimization component works as a constraint satisfaction solver and
recommendation engine that suggests the optimal service graph parameters based on the description of the
application package, the existing resources, the service graph policies and real-time facts. For the aggregation
of these results, Drools [76], a business rule management system (BRMS), is used. Drools reasons using
an inference engine that applies forward and backward chaining of rules, more precisely known as
a production rule system [123]. The optimisation is effectively a proactive adjustment of the running configuration:
facts, such as the available resources in the multi-cloud execution environments, together with the placement
constraints defined per component in the deployment policies, are taken into consideration for the
creation of the optimal service graph. The existing policies are provided by the Deployment Policies Manager,
while the resource information is provided by the Resource Management component. In this initial
architecture, the vision is that any re-configuration of the deployment plan also uses the same constraint
satisfaction solver mechanism, but is based on measurements derived from the monitoring components
and takes into consideration the runtime policies provided and enforced by the Run-time Policies and
SLA Enforcement component.
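The filter-and-rank idea behind the placement step can be sketched as follows. This is a toy stand-in, not Drools: the offerings, constraint attributes and prices are all hypothetical, and the real engine evaluates a much richer rule base:

```python
# Hypothetical cloud offerings (facts) known to the platform.
offerings = [
    {"name": "small-eu",  "region": "eu-west", "ram_gb": 2, "price": 0.05},
    {"name": "medium-eu", "region": "eu-west", "ram_gb": 8, "price": 0.20},
    {"name": "medium-us", "region": "us-east", "ram_gb": 8, "price": 0.15},
]

def place(component, constraints):
    """Keep only offerings satisfying every per-component deployment
    constraint, then recommend the cheapest one: a toy model of
    constraint satisfaction followed by ranking."""
    feasible = [o for o in offerings
                if o["ram_gb"] >= constraints["min_ram_gb"]
                and o["region"] == constraints["region"]]
    if not feasible:
        raise ValueError(f"no feasible offering for {component}")
    return min(feasible, key=lambda o: o["price"])

choice = place("streaming-service", {"min_ram_gb": 4, "region": "eu-west"})
print(choice["name"])  # medium-eu
```

Re-configuration at runtime would re-run the same kind of search with monitoring measurements added to the fact base.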

The next step of the process is the preparation and bundling of the application to be deployed on a Unicorn-
supported execution environment. The Unicorn component that shoulders this task is the Container Bundler.
The application is represented by a service graph that contains multiple components. Each component is treated
as a micro-service and is bundled into a container using a container engine such as rkt [101] or Docker Engine [10],
with CoreOS Container Linux [4] as the operating system for all containers. Applications targeted for this
containerization process are both Java and Python apps. In addition, at this step the bundler installs the Unicorn
agents that will be needed, such as the monitoring agent and the security service. For each container that
is created, a unique identifier (image reference) is produced and provided back to the model instance of
the service graph. The container images are forwarded to the Resource Manager component in order to proceed
with the deployment, based on the instructions of the Application Lifecycle Manager component.

The Resource Manager provides the Application Lifecycle Management with a common and uniform view of the
heterogeneous set of resources, by implementing a common interface for the creation, configuration, management
and removal of virtual resources. For IaaS frameworks owned by the user, the Resource Manager also takes care
of optimal usage of the underlying infrastructure, by supervising the operation of the virtualization technologies.
Resources under the control of the Resource Manager may include programmable physical computing, storage
and networking equipment. At the technical level, the Resource Manager is built on top of the Kubernetes container
manager and the Arcadia orchestrator. This way the Resource Manager provides Unicorn with resource-aware
orchestration capabilities that allow the initial deployment and management of service graph components as
containerized microservices on registered resources.

Finally, the Network Overlay Manager is responsible for the creation, management and configuration of the
network between the components of the defined service graph, which is used by the application that will be or has
been deployed. The vision of the project is to provide this functionality using Kubernetes and the
programmatic management of Software-Defined Networking (SDN), at least on OpenStack-based IaaS clouds.

Figure 16: Application Deployment


4.4 Monitoring and Elasticity


Figure 17 presents the concepts of Monitoring and Elasticity through the respective flow. In this flow, the
application consists of a service running in a container C1 at a cloud provider.

Monitoring agents installed at the containerized environment level publish metrics to the
Monitoring and Analytics component, part of the Run-time Enforcement sublayer of the Unicorn Platform. The
role of the Monitoring and Analytics component is to collect, store and analyse monitoring data regarding
resource utilization of the underlying virtual infrastructure (e.g., compute, memory, network) and the behaviour of
the deployed cloud application, from tailored application-level metrics (e.g., throughput, active users), in order to
detect and promptly notify cloud consumers of potential performance inefficiencies, security risks and recurring
customer and resource behaviour patterns. Monitoring is envisioned to be provided to consumers as a service
(MaaS), thus removing from consumers the overhead of deploying and maintaining in-house monitoring
infrastructure, and allowing the monitoring process to be decoupled from cloud provider
dependencies, so that monitoring is not disrupted and does not require significant reconfiguration when a
cloud service must span multiple availability zones and/or cloud sites. Although centrally accessible by
multiple tenants, the Unicorn Monitoring and Analytics Service internally processes and stores monitoring data
in a distributed fashion. To reduce monitoring costs (which are billable and noticeable in distributed topologies),
data movement across cloud sites and the intrusiveness on containerized deployments, the Monitoring and
Analytics component provides low-cost adaptive monitoring techniques shown capable of reducing the
monitoring footprint, energy consumption and the velocity of data disseminated over the network by adapting
the periodicity of the metric collection and dissemination process.
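The period-adaptation idea can be sketched as follows: back off the collection interval while the metric stream is stable, tighten it when the metric fluctuates. The thresholds, bounds and doubling/halving steps are illustrative assumptions, not the actual Unicorn algorithm:

```python
def next_interval(interval, recent, low=0.02, high=0.10,
                  t_min=1.0, t_max=60.0):
    """Adapt the metric-collection period from the relative spread of
    the most recent values: stable stream -> sample less often,
    volatile stream -> sample more often. All parameters illustrative."""
    mean = sum(recent) / len(recent)
    spread = (max(recent) - min(recent)) / mean if mean else float("inf")
    if spread < low:
        interval = min(interval * 2, t_max)   # stable -> relax sampling
    elif spread > high:
        interval = max(interval / 2, t_min)   # volatile -> tighten sampling
    return interval

print(next_interval(10.0, [50.1, 50.2, 50.0]))   # 20.0 (stable stream)
print(next_interval(10.0, [40.0, 80.0, 55.0]))   # 5.0 (volatile stream)
```

Widening the period during quiet phases is what cuts the monitoring footprint and the volume of data shipped across cloud sites.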

Collected monitoring data are then fed to the Intelligent Decision-Making Module (IDMM) component, which
offers real-time adaptation based on conditions and high-level policy constraints given by the Cloud Application
Administrator. The role of this component is to decide the most efficient configuration for the execution of the
cloud application, by continuously evaluating its behaviour, the offerings of multiple cloud providers, and user
requirements and policies. The IDM module is part of a MAPE (Monitor-Analyse-Plan-Execute) control loop, making
use of intelligent semi-supervised scheduling algorithms for the optimal placement of virtual machines and
containers across multiple availability zones and/or cloud sites, accounting for the heterogeneity among cloud
providers and their capabilities. The IDM module is envisioned to be packaged as an independent framework, allowing
state-of-the-art elasticity controllers to utilize these techniques to improve their control mechanisms and
take more reliable scaling decisions depending on various user-defined optimization strategies.

Within the Unicorn context, the realization of the real-time adaptation may take two paths, as shown in Figure
17, steps 6-a and 6-b. The first path simply adds a new container C2 within an already existing VM, while the
second path describes a more complex situation in which the currently provisioned resources have been
exhausted and are thus incapable of hosting a new container. To be able to scale correctly, the Resource Manager,
according to the plan produced by the IDMM, has to create a new VM first and then add the new container C3. The
Network Overlay Manager component is then responsible for creating the necessary virtual networking
configurations so that the newly created C3 container, which resides in a new VM, can communicate with the
rest of the deployed services.

Using the example use-case of the content streaming application, envision a scenario in which traffic in one
availability region/site (e.g. EU-West-1) increases to the point that an increase in latency is observed
between the end-users and the streaming service. In this case, the Unicorn IDMM evaluates the situation and decides
that two new streaming service instances are needed to cope with the increasing demand. For this, the IDMM produces a
new plan within the operational budget. This plan includes, firstly, the deployment of a new container to host the
streaming service and, secondly, the instantiation of a new VM to host the new service container on another
cloud provider but in the same region. As soon as the VM is instantiated, the Resource Manager pushes the
service container to the new VM and the Network Overlay Manager updates the virtual network with the specifics
of the new application container.
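The Analyse and Plan steps of such a decision can be sketched against the Table 2 policies (latency below 300 ms, budget of $100/hour). The one-instance-per-100 ms-of-excess heuristic and the per-instance cost are hypothetical, chosen only to make the budget constraint visible:

```python
def plan_scaling(latency_ms, instances, instance_cost_per_hour,
                 latency_limit_ms=300, budget_per_hour=100):
    """Analyse + Plan steps of a MAPE loop: propose extra streaming-service
    instances while the policy budget allows it. Toy model: one extra
    instance per 100 ms of excess latency."""
    if latency_ms <= latency_limit_ms:
        return 0                                         # policy satisfied
    wanted = -(-(latency_ms - latency_limit_ms) // 100)  # ceiling division
    current_spend = instances * instance_cost_per_hour
    affordable = int((budget_per_hour - current_spend) // instance_cost_per_hour)
    return max(0, min(wanted, affordable))

# 480 ms observed, 4 instances at $10/h each: wants 2 more, budget allows 6.
print(plan_scaling(480, instances=4, instance_cost_per_hour=10))  # 2
```

The Execute step would then hand the plan to the Resource Manager, which provisions the containers (and VMs, if needed) as described above.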

Figure 17: Monitoring & Elasticity Flow


4.5 Security and Privacy Enforcement


One of the major features of the Unicorn framework is the ability to embed security and privacy concepts within
the cloud application at development time. This is achieved by using the security and privacy Unicorn
design libraries, through the respective annotations.

Figure 18 illustrates the flow of information and data in a scenario in which the Security Enforcement mechanism is
activated. In this scenario, a Unicorn-enabled cloud application is already deployed, with annotations from the
security design library in its source code defining security policies for the application. Because of the
presence of security annotations, the Security Service has been installed within the deployed containers, and
the Security Enforcement, Vulnerability Assessment and Monitoring & Analytics Service components have been
instantiated at the Run-time Enforcement sublayer.

The Security Service component is a specially designed service that maps to a set of security software tools,
including an Intrusion Detection System (IDS), a firewall and a security scanner. The component
constantly monitors incoming and outgoing network traffic and propagates the collected data to the Monitoring
and Analytics component through the shared Monitoring API. The Monitoring & Analytics component is
extended with regression-based performance models built from systematic measurements, making it able to predict
potential malicious behaviour, such as DDoS attacks, and to estimate an attack's strength.
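As a hedged illustration of regression-based prediction (the actual Unicorn models are richer), a linear trend can be fitted by ordinary least squares over recent request rates and extrapolated; a forecast above a capacity threshold would raise a security incident. The traffic values and threshold below are hypothetical:

```python
def linear_forecast(series, steps_ahead=1):
    """Ordinary least-squares fit of y = a + b*t over the series,
    extrapolated steps_ahead points past the last observation."""
    n = len(series)
    ts = range(n)
    mean_t = sum(ts) / n
    mean_y = sum(series) / n
    cov = sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, series))
    var = sum((t - mean_t) ** 2 for t in ts)
    slope = cov / var
    intercept = mean_y - slope * mean_t
    return intercept + slope * (n - 1 + steps_ahead)

requests_per_sec = [100, 180, 260, 340, 420]   # steep upward trend
forecast = linear_forecast(requests_per_sec)
ATTACK_THRESHOLD = 450                          # illustrative capacity limit
print(forecast)                      # 500.0
print(forecast > ATTACK_THRESHOLD)   # True -> raise a security incident
```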

In the event of a potential security policy violation, a security incident is created and forwarded to the Security
Enforcement component. This component is responsible for performing the appropriate level of data
encryption, firewall and IDS configuration, etc., where specified by annotations. It makes use of a variety of tools,
such as intrusion detection systems (IDS), firewalls and encryption persistence mechanisms, that are part of the
Security Service component. These tools enforce the security policies selected by the Cloud Application Developer.
At the same time, the security incidents are reported to the Monitoring and Analytics Dashboard at the Cloud
IDE Plugin layer, and the Cloud Application Administrator receives a real-time notification about the incident from
the Real-Time Notification component of the Unicorn Platform.

The Vulnerability Assessment component cooperates with the Security Service to prepare a risk estimation
based on the characteristics of the cloud infrastructure, the type of the application and the degree of
protection required. The component makes use of a security scanner (e.g. Nmap) to check whether the deployed
application conforms to the security policies enforced by the Cloud Application Developer through the use of
annotations.


Figure 18: Security Enforcement

To demonstrate this flow, suppose a multi-cloud application is already deployed and the services
composing the application are exchanging sensitive information between different availability zones and regions.
The Cloud Application Developer has used the privacy annotations from the respective design library to enforce
policies such as geolocation constraints and the exchange of sensitive data only between authorized entities, and
the Security Service, which uses the XACML standard to evaluate access requests, has been installed within the
containerized environment.


Figure 19: Privacy Enforcement

Access requests are forwarded to the Privacy Enforcement Enabler component, which is responsible for checking
whether data access or movement is compatible with the privacy policies specified in the Data Privacy &
Constraints and Policies repositories. Whenever a violation of privacy policies occurs, the Privacy Enforcement
Enabler restricts access between services of the affected geographical regions and applies hashing and
deterministic encryption to protect sensitive data exchange. At the same time, Cloud Application Administrators
are notified about the incident through the Management Perspective of the Cloud IDE Plugin, more
specifically through the Monitoring and Security component.

In the use-case of the content streaming application, consider the case in which a request to the streaming service
for content from Germany originates outside Germany. This request violates the privacy policy defined by the
streaming application owner, and therefore the Privacy Enforcement Enabler will deny the streaming of
that content.
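This deny decision can be sketched as a simplified XACML-style evaluation with a deny-overrides flavour. The attribute names and the policy representation below are hypothetical; the real enabler evaluates full XACML policies:

```python
def evaluate(request, policies):
    """Simplified XACML-style evaluation (deny-overrides): each policy is
    a dict with a 'target' (attribute conditions) and an 'effect'; any
    policy whose target matches the request and whose effect is Deny wins."""
    for policy in policies:
        if all(request.get(attr) == value
               for attr, value in policy["target"].items()):
            if policy["effect"] == "Deny":
                return "Deny"
    return "Permit"

# Hypothetical policy: content stored in Germany may not leave Germany.
policies = [{
    "target": {"content_location": "DE", "requester_outside": True},
    "effect": "Deny",
}]

print(evaluate({"content_location": "DE", "requester_outside": True}, policies))   # Deny
print(evaluate({"content_location": "DE", "requester_outside": False}, policies))  # Permit
```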

In the case of security enforcement, the Security Service installed at the container layer constantly
intercepts inbound and outbound traffic and sends monitoring data to the Monitoring & Analytics
component in order to detect and prevent potential attacks on the application's ecosystem. Also, before the final build
of the streaming application, the rule to encrypt streaming content using the AES-128 algorithm is interpreted, and
the Security Enabler injects code implementing this functionality into the source code of the streaming service class.


5 Unicorn Use-Cases
In this chapter, the use cases for the Unicorn Framework to be delivered by the Unicorn project are identified,
specified and presented in a standardized scheme. The use cases are derived from the set of requirements
summarized in Section 3 of this deliverable. References assure the traceability between the use cases and the
requirements, simplifying the validation of the final architecture at a later stage. Flows enrich the specification
by illustrating their dynamic aspects as sequences of actions, including alternatives and exceptions where
applicable.

Figure 20: Unicorn Use Case UML Diagram

Use cases are employed as a means to transform the identified requirements into an exhaustive set of system
functionalities to be provided by the Unicorn Framework; they are depicted in the UML diagram in Figure 20.


UC.1: Define runtime policies and constraints

Table 3: Define runtime policies and constraints use-case

Use Case ID UC.1

Name Define runtime policies and constraints

Description/Actor The Cloud Application Owner will be able to express runtime policies and constraints
according to the cloud application properties.
Requirements FR.1

Precondition Policy Editor available

Flow

Step Action

Step 1 The Cloud Application Owner opens the Policy Editor to define runtime policies and
constraints.
Step 2 The business logic of a cloud application is expressed in the form of runtime policies
and constraints.
Result/Post Condition A runtime policy is defined, which will be validated during the deployment process.
Exceptional Flow

Exception 1 The runtime policy cannot be supported by the Policy Manager and the Cloud
Application Owner gets notified.
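The flow above can be sketched in code. The following minimal Python sketch shows one way a runtime policy could be represented and validated by a Policy Manager; the class, the supported metric names and the operators are illustrative assumptions made for this example, not the actual Unicorn policy model.

```python
from dataclasses import dataclass

# Hypothetical sketch of a runtime policy and its validation (UC.1).
# Metric and operator sets are illustrative, not the Unicorn API.

SUPPORTED_METRICS = {"cpu_usage", "memory_usage", "request_latency"}

@dataclass
class RuntimePolicy:
    metric: str       # monitored metric the policy refers to
    operator: str     # comparison operator, e.g. ">" or "<"
    threshold: float  # violation threshold
    action: str       # enforcement action, e.g. "scale_out"

def validate_policy(policy: RuntimePolicy) -> None:
    """Reject policies the Policy Manager cannot support (Exception 1)."""
    if policy.metric not in SUPPORTED_METRICS:
        raise ValueError(f"Unsupported metric: {policy.metric}")
    if policy.operator not in (">", "<", ">=", "<="):
        raise ValueError(f"Unsupported operator: {policy.operator}")

# A supported policy passes silently; an unsupported one notifies the owner.
scale_out = RuntimePolicy("cpu_usage", ">", 80.0, "scale_out")
validate_policy(scale_out)
```

An unsupported policy would raise here, corresponding to Exception 1 where the Cloud Application Owner is notified that the Policy Manager cannot support the rule.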

UC.2: Develop Unicorn-enabled cloud application

Table 4: Develop Unicorn-enabled cloud application

Use Case ID UC.2

Name Develop Unicorn-enabled cloud application

Description/Actor The Application Developer will develop a Unicorn enabled cloud application using
provided development support.
Requirements FR.1, FR.22, FR.23

Precondition Annotated Source Code Editor, Service Graph Editor, Policy Editor, Annotations Design
Libraries are available, Runtime policies defined
Flow


Step Action

Step 1 The Application Developer develops code enriched with annotations mapping to runtime
enforcement policies and constraints (e.g., security, privacy, elasticity, monitoring).
Step 2 The Application developer creates the Service Graph of the application.

Step 3 The annotated application code and the Service Graph are registered in the appropriate
repositories.
Post condition/Result A Unicorn-enabled cloud application is developed and its design artefacts are stored in
the system repositories.
Exceptional Flow

Exception 3-a Registration of application annotated code in the Microservice repository failed.

Exception 3-b Registration of Service Graph in the respective repository failed.
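To illustrate Step 1, the following sketch mimics Unicorn-style code annotations using Python decorators; the actual annotation libraries (see UC.16) are language-specific, and the decorator names `@monitor` and `@elastic` as well as the metadata layout are hypothetical.

```python
# Illustrative analogue of Unicorn code annotations, written as Python
# decorators. Names and metadata layout are assumptions for this sketch.

def monitor(metric, period_s=5):
    """Attach monitoring metadata that an enabler could later interpret."""
    def wrap(fn):
        meta = getattr(fn, "_unicorn", {})
        meta.setdefault("monitor", []).append({"metric": metric, "period_s": period_s})
        fn._unicorn = meta
        return fn
    return wrap

def elastic(min_replicas=1, max_replicas=5):
    """Attach elasticity bounds for the scaling enabler to enforce."""
    def wrap(fn):
        meta = getattr(fn, "_unicorn", {})
        meta["elastic"] = {"min": min_replicas, "max": max_replicas}
        fn._unicorn = meta
        return fn
    return wrap

@monitor("request_latency", period_s=10)
@elastic(min_replicas=2, max_replicas=8)
def stream_video(request):
    """Business logic of an annotated microservice (placeholder)."""
    return "ok"

# The collected metadata is what would be bundled into the deployment assembly.
metadata = stream_video._unicorn
```

The point of the sketch is that annotations leave the business logic untouched while attaching machine-readable metadata that the Unicorn enablers can interpret at packaging and deployment time.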

UC.3: Package Unicorn-enabled cloud application

Table 5: Package Unicorn-enabled cloud application

Use Case ID UC.3

Name Package Unicorn-enabled cloud application

Description/Actor The Cloud Application Product Manager should be able to package any executable in a
way that is comprehensible to the cloud deployment artefacts of Unicorn.
Requirements FR.4

Precondition Annotated source code of the cloud application, Service Graph description of the
application, Package Manager should be available through the plugin.
Flow

Step Action

Step 1 The Cloud Application Product Manager selects the cloud application to be packaged in
the Package Manager.
Step 2 The Package Manager bundles service graph description and the annotated source
code into a deployment assembly.
Post Condition/Result The Unicorn deployment assembly is given to the Unicorn Platform for deployment.
Exceptional Flow

Exception 1 The Package Manager fails.


UC.4: Deploy Unicorn-compliant cloud application

Table 6: Deploy Unicorn-compliant cloud application

Use Case ID UC.4

Name Deploy Unicorn-compliant cloud application

Description/Actor The Cloud Application Administrator will submit the application for deployment to
available cloud offerings.
Requirements FR.5, FR.11

Precondition A valid Unicorn deployment assembly, valid cloud offerings and valid Unicorn deployment
artefacts exist.
Flow

Step Action

Step 1 The Policy Manager instantiates the required runtime enablers to enforce runtime
policies and decides the necessary unicorn agents to be installed.
Step 2 The Application Placement Optimization Module automatically derives a (near-)
optimal application placement plan based on the user defined policies and constraints.
Step 3 Containers are created and the necessary Unicorn agents are installed within the
container.
Step 4 Based on the optimal placement plan a new VM is instantiated.

Step 5 The container is pushed to the respective VM.

Post Condition/Result A Unicorn-compliant cloud application is deployed.
Alternate Flow

Step 2 The Application Placement is realized based on resource requirements and cloud
offerings, defined by the Cloud Application Administrator. Go back to Step 3.
Step 4 Based on the near optimal placement plan the container is pushed to an existing VM.

Exceptional Flow

Exception 1 Failure to instantiate runtime enablers.

Exception 3 Failure to install agents within the containers.


Exception 4 The new VM cannot be instantiated.
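Step 2 of the flow can be illustrated with a toy first-fit packing heuristic: containers with CPU/RAM demands are assigned to VMs, and a new VM is instantiated only when no existing one fits (Steps 4-5). The real Application Placement Optimization Module also weighs user-defined policies and cloud offerings; the function below is a hedged sketch of the mechanism only, with made-up capacities.

```python
# First-fit sketch of deriving a placement plan (UC.4). Containers are
# (name, cpu, ram) tuples; a new VM is created when none of the existing
# VMs has enough free capacity. All numbers are illustrative.

def place(containers, vm_capacity):
    """Return (plan, number_of_vms); plan maps container name -> VM index."""
    vms = []   # each VM: {"free": [cpu, ram], "containers": [...]}
    plan = {}
    for name, cpu, ram in containers:
        for i, vm in enumerate(vms):
            if vm["free"][0] >= cpu and vm["free"][1] >= ram:
                vm["free"][0] -= cpu
                vm["free"][1] -= ram
                vm["containers"].append(name)
                plan[name] = i
                break
        else:  # no existing VM fits: instantiate a new one (Step 4)
            vms.append({"free": [vm_capacity[0] - cpu, vm_capacity[1] - ram],
                        "containers": [name]})
            plan[name] = len(vms) - 1
    return plan, len(vms)

# VMs offer 4 vCPUs / 8 GB; "db" does not fit next to "api", so a second
# VM is instantiated, while "cache" reuses the first one.
plan, n_vms = place([("api", 2, 4), ("db", 2, 8), ("cache", 1, 2)], (4, 8))
```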

UC.5: Manage the runtime lifecycle of a deployed cloud application

Table 7: Manage the runtime lifecycle of a deployed cloud application

Use Case ID UC.5

Name Manage the runtime lifecycle of a deployed cloud application

Description/Actor The Cloud Application Administrator will manage the runtime lifecycle of deployed cloud
applications including management of state and runtime aspects of the application as
driven by the Unicorn Core Context Model. The management of the runtime lifecycle of
the application is performed through the Unicorn GUI.
Requirements FR.10

Precondition Deployed Cloud Application on respective cloud execution environment.

Flow

Step Action

Step 1 The Cloud Application Administrator selects the application to be managed from the list
of all applications that have been created by Cloud Application Developers.
Step 2 The Cloud Application Administrator selects the management functionality he wants to
perform, e.g. (un-)deployment, starting, stopping or pausing of the application.
Step 3 The Application management process is initiated.

Step 4 The progress is displayed to the Cloud Application Administrator.

Post condition/Result The chosen management action is performed for the selected cloud application. The
Cloud Application Administrator gets notified through the dashboard.

Exceptional Flow

Exception 3 The lifecycle management action fails.

Exception 4 The system displays an error message informing about the inconsistent state of the
application to the user.


UC.6a: Manage privacy preserving mechanisms

Table 8: Manage privacy preserving mechanisms at design time

Use Case ID UC.6a

Name Manage privacy preserving mechanisms at design time

Description/Actor A Cloud Application Administrator should be able to register and manage privacy
constraints that are relevant to the execution context of the cloud application.
Requirements FR.22

Precondition A valid formal service graph model exists.

Flow

Step Action

Step 1 Using the Policy Editor, the service graph model is loaded and the available ports are
visualized.

Step 2 The Cloud Application Administrator provides privacy constraints per Microservice.

Step 3 Privacy constraints are saved and forwarded to the Policy Manager.

Post Condition/Result The Policy Manager knows the privacy constraints defined by the Cloud Application
Administrator.

Alternate Flow

Step 1 A service graph is already deployed.

Step 2 The Policy Editor opens the deployed service graph and loads the existing security
constraints. The Cloud Application Administrator alters the constraints and saves them.
Step 3 Privacy constraints are saved and forwarded to the Execution Manager.

Exceptional Flow

Exception 1 One or more privacy constraints cannot be applied. A respective exception is thrown.


UC.6b: Manage privacy enforcement at runtime

Table 9: Manage privacy enforcement at runtime

Use Case ID UC.6b

Name Manage privacy enforcement at runtime

Description/Actor The Cloud Application Administrator will be able to enforce privacy policies during
runtime.

Requirements FR.22

Precondition Specific privacy enforcement policy enabler is implemented, privacy constraints are
provided to the Policy Manager.

Flow

Step Action

Step 1 Using the Policy Editor, the Cloud Application Administrator configures the attributes and
the values for the policy.

Step 2 The Cloud Application Administrator selects the proper enabler.

Post Condition/Result The enabler is integrated properly and the privacy policy is enforced.

Alternate Flow

Step 2 A proper enabler doesn't exist, so the Cloud Application Administrator is not able to
select it.
Step 3 The proper enabler is implemented and added to Unicorn by the Unicorn Developer,
so the Cloud Application Administrator can now continue with the selection of the
fitting enabler.
Exceptional Flow

Exception 1 A mismatched enabler leads to a failure to properly enforce the privacy policies.


UC.7a: Manage security enforcement mechanisms

Table 10: Manage security enforcement mechanisms

Use Case ID UC.7a

Name Manage security enforcement mechanisms

Description/Actor The Cloud Application Administrator registers and manages new implementations or
extensions or customizations (new detection rules) of security enforcement mechanisms
for runtime monitoring, detection and labelling of abnormal and intrusive cloud network
traffic behaviour.
Requirements FR.23

Precondition Security Enforcement annotations are created.

Flow

Step Action

Step 1 The Cloud Application Administrator enters security mechanism metadata and locations
for binary code and configuration files.

Step 2 The security enforcement mechanism is installed in the Security Enforcement Enabler
along with metadata, binary code and configuration files.

Post condition/Result The security mechanism implementation is installed and can be used by the Security
Enforcement Enabler.

Alternate Flow

Step 1-a Update security mechanism metadata and/or locations.

Step 1-b Delete security mechanism metadata.

Step 2-a Update security mechanism in Security Mechanism repository.

Step 2-b Delete security mechanism in Security Mechanism repository.


Exceptional Flow

Exception 1 Registration/Update/Deletion of a security mechanism in the Security Enforcement
Enabler failed.

UC.7b: Manage security enforcement mechanisms (enabler enforces security/privacy constraints)

Table 11: Manage security enforcement mechanisms (enabler enforces security/privacy constraints)

Use Case ID UC.7b

Name Manage security enforcement mechanisms (enabler enforces security/privacy
constraints)
Description/Actor The Cloud Application Administrator enforces security constraints by security
enforcement mechanisms.

Requirements FR.23

Precondition Security Enforcement Enabler is implemented.

Flow

Step Action

Step 1 The Security Enforcement Enabler based on the corresponding annotations selects the
appropriate security enforcement tool.

Step 2 The chosen security enforcement tool (e.g. Snort, Nmap/Nessus) is installed in the
OS/Containerized environment where the application itself is installed.

Post condition/Result The chosen security enforcement tool is installed by the Security Enforcement Enabler.
Alternate Flow

Step 2 The configuration of the already installed security enforcement tool is changed by the
Security Enforcement Enabler based on a newly identified vulnerability of the
application.
Exceptional Flow

Exception 1 Installation of security enforcement tool failed.


UC.8: Monitor application behaviour and performance

Table 12: Monitor application behaviour and performance

Use Case ID UC.8

Name Monitor application behaviour and performance

Description/Actor The Cloud Application Administrator should be able to monitor the behaviour and
performance of his/her organization's deployed cloud applications.

Requirements FR.7

Precondition Monitoring agent exists on cloud application, cloud application is successfully deployed.

Flow

Step Action

Step 1 The Unicorn platform, through the monitoring enabler, interprets the monitoring
requirements, configurations and constraints of the user and automatically instantiates
and initiates monitoring of the newly deployed application.

Step 2 Monitoring data, capturing the application behaviour and performance of the underlying
platform, are immediately stored and made available through the Unicorn
high-performance and distributed data indexing scheme.

Step 3 Through the Unicorn Dashboard, real-time monitoring data are accessed by the user in
a graphical form.

Step 4 Through the Unicorn Dashboard, users can formulate (continuous) monitoring queries
to access and trawl historical and/or aggregated monitoring data from the analytics
service.

Post condition/Result Real-time and historic monitoring data, capturing the application behaviour and
performance of the underlying platform, are collected and made available to Unicorn
interested entities (e.g., Cloud Application Admin).

Alternate Flow

Step 1 At any given time after monitoring is successfully established, the user may request to
adapt a deployed cloud application's monitoring process configuration (e.g., adapt
monitoring periodicity, granularity, etc.).

Exceptional Flow


Exception 1 Monitoring requirements or configuration is not valid. Monitoring will not be
instantiated and the analogous erroneous status and message will be produced.

Exception 2 The monitoring system is not receiving any data or the monitoring data collection service
has stopped running.

Exception 4 Monitoring query or monitoring data is not in the expected format.
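A continuous monitoring query (Step 4) can be pictured as a windowed aggregation over an incoming metric stream. The sketch below assumes a simple in-memory sliding-window average; the actual Unicorn indexing and analytics scheme is distributed and far richer, so the class name and window semantics are illustrative only.

```python
from collections import deque

# Sketch of a continuous monitoring query (UC.8): a sliding-window
# average over a metric stream, as an analytics service might evaluate it.

class SlidingAverage:
    def __init__(self, window):
        self.window = window     # number of samples kept in the window
        self.values = deque()
        self.total = 0.0

    def push(self, v):
        """Ingest one sample and return the current windowed average."""
        self.values.append(v)
        self.total += v
        if len(self.values) > self.window:
            self.total -= self.values.popleft()
        return self.total / len(self.values)

q = SlidingAverage(window=3)
for sample in [50, 70, 90, 110]:
    latest = q.push(sample)
# after the 4th sample, the window holds [70, 90, 110] -> average 90.0
```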

UC.9: Adapt deployed cloud applications in real time

Table 13: Adapt deployed cloud applications in real time

Use Case ID UC.9

Name Adapt deployed cloud applications in real time

Description/Actor Cloud Application Administrators should be able to adapt their Unicorn-enabled
application in real-time.

Requirements FR.9

Precondition Deployed Unicorn-enabled cloud application, Decision Making module, Monitor &
Analytics service, Elasticity policies and constraints are defined.

Flow

Step Action

Step 1 The Unicorn system detects a violated runtime policy during continuous monitoring
analytics.

Step 2 The Decision-Making module provides to the system a new plan to scale resources (e.g.
add/remove containers, add/remove VMs).

Step 3 The new scaling plan is recorded in the log files and the Cloud Application Administrator
uses the dashboard to review the log files about the new redeployment plan.

Step 4 The Resource Manager allocates and deallocates resources according to the plan.

Step 5 The system informs the Cloud Application Administrator about the success of the new
plan.

Post condition/Result The application is adapted according to the objectives of the Application Administrator.


Exceptional Flow

Exception 4 The system fails to allocate/deallocate the necessary resources. The Cloud Application
Administrator is informed.
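The decision step (Step 2) can be sketched as a threshold rule that maps a detected violation to a scaling action while respecting elasticity bounds. The thresholds, bounds and action names below are illustrative values, not the Unicorn Decision-Making module itself.

```python
# Sketch of the decision step in UC.9: translate a monitored CPU average
# into a scaling action, bounded by (made-up) elasticity limits.

def scaling_plan(avg_cpu, replicas, min_r=1, max_r=8,
                 scale_out_at=80.0, scale_in_at=20.0):
    if avg_cpu > scale_out_at and replicas < max_r:
        return {"action": "add_container", "replicas": replicas + 1}
    if avg_cpu < scale_in_at and replicas > min_r:
        return {"action": "remove_container", "replicas": replicas - 1}
    return {"action": "none", "replicas": replicas}

# A violation (CPU above 80%) yields a scale-out plan for the
# Resource Manager to enact (Step 4).
plan = scaling_plan(avg_cpu=92.0, replicas=3)
```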

UC.10: Get real-time notifications about security incidents and QoS guarantees

Table 14: Get real-time notifications about security incidents and QoS guarantees

Use Case ID UC.10

Name Get real-time notifications about security incidents and QoS guarantees

Description/Actor The Cloud Application Administrator (and other interested entities) could receive real-
time notifications for security incidents, application abnormal behaviour and QoS
violations.

Requirements FR.8

Precondition Security and Monitoring agents exist within the cloud application, the cloud application
is successfully deployed. Monitoring metrics are specified using the respective
annotations.

Flow

Step Action

Step 1 The Unicorn platform, through the security enabler, interprets the security and QoS
requirements, configurations and constraints of the user and instantiates and initiates
security enforcement for the newly deployed application.

Step 2 Monitoring data, capturing the application behaviour and performance of the underlying
platform, is continuously accessed from the underlying Monitoring Service.

Step 3 The Unicorn platform analyses and assesses the obtained data to detect abnormal
application behaviour, violation of QoS agreement(s) and security incidents.

Step 4 The Unicorn platform notifies the Cloud Application Administrator about abnormal
application behaviour, violation of QoS agreement(s) and security incidents.

Step 5 Through the Unicorn dashboard, the Cloud Application Administrator is presented with
graphical or textual analysis of the timeframe of each incident together with hint(s) for
the set of actions to take to resolve the cause of each incident.

Step 6 The user decides to take no actions and confirms the notification.


Post condition/Result In the case of abnormal application behaviour, violation of QoS agreement(s) or a
security incident, the Cloud Application Administrator is notified in real-time through the
Unicorn Dashboard and is advised to take further actions.

Alternate Flow

Step 3 At any given time, or after a violation, the Cloud Application Administrator decides to
alter the security enforcement requirements or configuration of his/her application.
Step 6 The Cloud Application Administrator decides that the security or QoS violation(s) are
severe, terminating the application deployment on the current cloud environment
provider(s) in order to re-deploy on a different environment.

Exceptional Flow

Exception 1 Security requirements or configuration is not valid. Security will not be instantiated and
the analogous erroneous status and message will be produced.

UC.11: Perform deployment assembly validation

Table 15: Perform deployment assembly validation

Use Case ID UC.11

Name Perform deployment assembly validation

Description/Actor The Cloud Application Tester will be able to check the correctness of a Unicorn
deployment artefact.

Requirements FR.6

Precondition A Unicorn deployment artefact is ready for deployment.

Flow

Step Action

Step 1 The deployment assembly is submitted to a validation endpoint.

Step 2 The correctness of the annotations is checked.

Step 3 The availability of proper handlers per annotation is checked.

Step 4 The correctness of the service graph is checked.


Result/Post Condition A validation report about performed annotation and service graph checks is available.

Exceptional Flow

Exception 1 Valid Annotations are not identified.

Exception 3 Proper Handlers are not identified.

Exception 4 Service Graph is invalid.
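Steps 2-4 of the validation flow can be sketched as follows; the set of registered handlers, the annotation names, and the service-graph representation (a set of services plus directed edges) are assumptions made for the example, not the Unicorn validation endpoint.

```python
# Sketch of deployment assembly validation (UC.11): every annotation must
# have a registered handler (Steps 2-3) and every service-graph edge must
# reference declared services (Step 4). Names are hypothetical.

HANDLERS = {"monitor", "elastic", "encrypt"}

def validate_assembly(annotations, services, edges):
    """Return a validation report; an empty report means the assembly is valid."""
    report = []
    for ann in annotations:
        if ann not in HANDLERS:
            report.append(f"no handler for annotation '{ann}'")
    for src, dst in edges:
        if src not in services or dst not in services:
            report.append(f"dangling edge {src}->{dst}")
    return report

# "trace" has no handler and "cache" is not a declared service, so the
# report lists two problems (Exceptions 3 and 4).
report = validate_assembly({"monitor", "trace"},
                           {"api", "db"}, [("api", "db"), ("api", "cache")])
```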

UC.12: Perform security and benchmark tests

Table 16: Perform security and benchmark tests

Use Case ID UC.12

Name Perform security and benchmark tests

Description/Actor The Cloud Application Administrator needs to perform security tests and benchmarks
to detect security threats, privacy breaches and measure the risk that multi-cloud
applications exhibit.

Requirements FR.24

Precondition Source code and binary available for testing, a repository of known vulnerabilities is
available and populated.

Flow

Step Action

Step 1 The source vulnerability assessment is executed to identify source code flaws that
may lead to potential vulnerabilities during execution.

Step 2 Binary-level assessment is executed to identify vulnerabilities in an execution
environment which resembles the operational one.

Step 3 The result of both vulnerability assessments will trigger the calculation of risks that
are associated with each cloud application type.

Step 4 The Unicorn Dashboard informs the Cloud Application Administrator about the risk
assessment results.


Post condition/Result Security threats and privacy breaches are detected and the risk that multi-cloud
applications exhibit is measured.
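Step 3 (risk calculation) can be illustrated with a deliberately simple worst-case-severity rule over the findings of both assessments; the severity weights and the aggregation are assumptions for this sketch and do not reflect the actual Unicorn risk model.

```python
# Sketch of UC.12, Step 3: combine source-level and binary-level
# vulnerability findings into one risk figure. The CVSS-like weights and
# the worst-case aggregation are illustrative assumptions.

SEVERITY_WEIGHT = {"low": 2.0, "medium": 5.0, "high": 8.0, "critical": 10.0}

def risk_score(source_findings, binary_findings):
    """Return the worst-case severity across both assessments (0.0 if clean)."""
    findings = source_findings + binary_findings
    if not findings:
        return 0.0
    return max(SEVERITY_WEIGHT[s] for s in findings)

# One high-severity binary finding dominates the score shown on the
# dashboard (Step 4).
score = risk_score(["low", "medium"], ["high"])
```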

UC.13: Manage cloud provider credentials

Table 17: Manage cloud provider credentials

Use Case ID UC.13

Name Manage cloud provider credentials

Description/Actor The Cloud Product Manager will be able to register and manage multiple cloud
provider credentials.

Requirements FR.2

Precondition Valid credentials for each provider exist.

Flow

Step Action

Step 1 The Cloud Product Manager registers the provider credentials using the appropriate
form within the Unicorn dashboard.

Step 2 Cloud provider credentials are stored within the credentials repository in Unicorn.

Post Condition/Result Access credentials are securely managed and stored in the Platform Administration
Repository. Access credentials are available during the deployment process.
Alternate Flow

Step 1-a The Cloud Product Manager deletes saved credentials from the Unicorn Platform using
the Cloud IDE plugin.
Step 1-b The Cloud Product Manager modifies saved credentials using the Dashboard of the
Cloud IDE plugin.
Exceptional Flow

Exception 2 Error during the registration process.


UC.14: Search for fitting cloud provider offerings

Table 18: Search for fitting cloud provider offerings

Use Case ID UC.14

Name Search for fitting cloud provider offerings

Description/Actor The Application Product Manager needs to be able to search for the cloud providers
that can be used in order to deploy the cloud application.

Requirements FR.3

Precondition Cloud IDE plugin, Unified API, A valid service graph model and several valid cloud
provider models should exist in order to perform matchmaking.

Flow

Step Action

Step 1 The Cloud IDE plugin, through the respective dashboard, displays a list of all available
cloud offerings which are exposed through the Unicorn Unified API.

Step 2 The Cloud Application Product Manager manually selects the desired cloud offerings
to deploy the application.

Post Condition/Result The Cloud Application will be deployed to the selected cloud offerings.
Alternate Flow

Step 2-a The Cloud IDE Plugin displays all fitting cloud offerings based on defined placement
constraints.

Exceptional Flow

Exception 2 There is no provider capable of hosting the deployable cloud application. A respective
exception is thrown.
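The alternate flow (filtering fitting offerings against placement constraints) can be sketched as below; the offering fields (`vcpus`, `ram_gb`, `region`) and constraint keys are illustrative, as the real matchmaking operates on the Unicorn service graph and cloud provider models.

```python
# Sketch of offering matchmaking (UC.14): filter registered cloud
# offerings against placement constraints; raising when nothing fits
# corresponds to Exception 2. Field names are hypothetical.

def fitting_offerings(offerings, constraints):
    def fits(o):
        return (o["vcpus"] >= constraints.get("min_vcpus", 0)
                and o["ram_gb"] >= constraints.get("min_ram_gb", 0)
                and (not constraints.get("region")
                     or o["region"] == constraints["region"]))
    matches = [o["name"] for o in offerings if fits(o)]
    if not matches:
        raise LookupError("no provider can host the application")
    return matches

offers = [{"name": "a-small", "vcpus": 2, "ram_gb": 4, "region": "eu"},
          {"name": "b-large", "vcpus": 8, "ram_gb": 32, "region": "us"}]
matches = fitting_offerings(offers, {"min_vcpus": 2, "region": "eu"})
```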

UC.15: Define application placement conditions

Table 19: Define application placement conditions

Use Case ID UC.15

Name Define application placement conditions

Description/Actor The Cloud Application Developer will be able to define placement constraints for the
entire service graph.


Requirements FR.11

Precondition A valid service graph exists.

Flow

Step Action

Step 1 The Cloud Application developer opens the service graph of the cloud application in
the service graph and policies editor.
Step 2 For each Microservice a specific minimum runtime characteristic is provided.

Step 3 The Cloud Application Developer adds additional deployment constraints (e.g.
location, security rules).

Post Condition/Result Defined placement conditions are respected by the Application Placement
Optimization module during the deployment process.

Alternate Flow

Step 1 Existing constraints of a service graph are loaded.

Step 2 Existing constraints are modified and new ones are created.

Exceptional Flow

Exception 1 Service graph is not properly loaded.


UC.16: Develop code annotation libraries

Table 20: Develop code annotation libraries

Use Case ID UC.16

Name Develop code annotation libraries

Description/Actor The Unicorn Developer will develop, maintain and modify code annotation libraries that
will be provided to Unicorn Cloud Application Developers for annotating their code.

Requirements FR.17

Precondition -

Flow

Step Action

Step 1 The Unicorn Developer develops metadata code annotations libraries for monitoring,
resource management, security and data privacy enforcement policies and constraints.

Step 2 Code annotation library is added to the Unicorn platform.

Post condition/Result The Cloud Application Developers will have at their disposal the code annotation libraries
which they will use to annotate their code to synchronize the application business logic
with the Core Context Model.

Alternate Flow

Step 1-a The Unicorn Developer modifies and manages metadata code annotations libraries for
monitoring, resource management, security and data privacy enforcement policies and
constraints.

Step 1-b The Unicorn Developer develops new metadata code annotations libraries.

Exceptional Flow

Exception 1 Library modifications by the Unicorn Developer cause incompatibilities with previous
versions of the library.


UC.17: Develop enablers enforcing policies via code annotations

Table 21: Develop enablers enforcing policies via code annotations

Use Case ID UC.17

Name Develop enablers enforcing policies via code annotations

Description/Actor The Unicorn Developer creates handlers that enforce policies via interpreted Unicorn
annotations.

Requirements FR.18

Precondition Code Annotation Libraries exist and Core Context Model is available.

Flow

Step Action

Step 1 The Unicorn Developer develops the respective enabler to enforce runtime policies via
code annotations (e.g., monitoring, auto-scaling, security, privacy).

Step 2 Enabler to enforce runtime policies is stored in dedicated repository.

Post Condition/Result A new enabler is available in the Unicorn platform.

UC.18: Provide abstract description of programmable cloud execution environment through unified API

Table 22: Provide abstract description of programmable cloud execution environment through unified API

Use Case ID UC.18

Name Provide abstract description of programmable cloud execution environment through
unified API

Description/Actor The Unicorn Developer describes programmable cloud execution environments
offered within Unicorn through a unified API.

Requirements FR.15

Precondition The Cloud Provider is registered in Unicorn and has established a bridge between his
cloud offering and Unicorn.

Flow


Step Action

Step 1 The Unicorn Developer provides an abstract model describing resources and
capabilities of underlying programmable cloud execution environments.

Step 2 The Unicorn Developer develops a unified API based on that model.

Post Condition/Result An abstract description of cloud offerings is developed.

UC.19: Develop and use orchestration tools for (multi-)cloud deployments

Table 23: Develop and use orchestration tools for (multi-)cloud deployments

Use Case ID UC.19

Name Develop and use orchestration tools for (multi-)cloud deployments

Description/Actor The Unicorn Developer will be able to create or use middleware software capable of
(de-) reserving resources and services by multiple cloud providers.

Requirements FR.16

Precondition Cloud Providers are registered with Unicorn and their offerings are available; the Unified
API exists.
Flow

Step Action

Step 1 The Unicorn Developer develops adapters for each registered cloud provider
respecting the Unified API.
Post Condition/Result The Unicorn Platform has the ability to support multi-cloud (de-)reservation of services
and resources.
Alternate Flow

Step 1 Unicorn Developer modifies an adapter for a registered cloud provider.

Exceptional Flow

Exception 1 Inconsistency between the developed adapters and the respective providers.


UC.20: Manage programmable infrastructure, service offerings and QoS

Table 24: Manage programmable infrastructure, service offerings and QoS

Use Case ID UC.20

Name Manage programmable infrastructure, service offerings and QoS

Description/Actor The Cloud Provider is able to manage the service offerings, programmable
infrastructure and QoS through the unified API.

Requirements FR.19

Precondition Unified API must be available.

Flow

Step Action

Step 1 The Cloud Provider should express the offered services and QoS capabilities using the
derived Unicorn Model.

Step 2 The Cloud Provider uses the Cloud Offerings Manager of the Unicorn Platform to
register the offered services and QoS capabilities via the Unicorn Unified API.

Post Condition/Result A cloud offering is available and manageable within the Unicorn framework.
Alternate Flow

Step 2 The Cloud provider modifies the offered services and QoS capabilities via the Unicorn
Unified API.
Exceptional Flow

Exception 2 Service offerings and QoS capabilities from the Cloud Provider could not be matched
to the Unicorn model.

UC.21: Ensure secure data migration across cloud sites and availability zones

Table 25: Ensure secure data migration across cloud sites and availability zones

Use Case ID UC.21

Name Ensure secure data migration across cloud sites and availability zones


Description/Actor The Cloud Application Administrator is able to migrate data securely across cloud sites
and availability zones based on user constraints and data access policies.

Requirements FR.22

Precondition Deployment assembly, Privacy Enforcement enabler are implemented.

Flow

Step Action

Step 1 The Unicorn platform, through the Privacy Enforcement enabler, checks a data migration
request considering user constraints and data access policies.

Step 2 A secure channel is established between source and target cloud site for data migration.

Step 3 Data migration takes place.

Step 4 The Unicorn Dashboard informs the Cloud Application Administrator about the
successful data migration.

Post condition/Result Data migration has been conducted successfully.
Exceptional Flow

Step 1-a When a data access policy or a user constraint policy is violated, the data migration
request is denied.

Step 3-a Data migration fails.

UC.22: Ensure security and data privacy standards

Table 26: Ensure security and data privacy standards

Use Case ID UC.22

Name Ensure security and data privacy standards

Description/Actor The Cloud Provider enforces security and data privacy standards.

Requirements FR.24

Precondition Cloud offerings are available.


Flow

Step Action

Step 1 The Unicorn Administrator, based on internationally accepted standards, consults the
Cloud Provider's advertised specifications and auditor certificates to update the Cloud
Offerings Repository.

Step 2 The Cloud Provider offerings are filtered based on application requirements (specified
on annotations or policies) over security and data privacy standards.

Post condition/Result The list of available security and privacy standards per cloud offering is available.

UC.23: Monitor network traffic for abnormal or intrusive behaviour

Table 27: Monitor network traffic for abnormal or intrusive behaviour

Use Case ID UC.23

Name Monitor network traffic for abnormal or intrusive behaviour

Description/Actor The Cloud Provider is able to detect abnormal and intrusive cloud network traffic.

Requirements FR.24, FR.23

Precondition Cloud Application deployed with installed security agent.

Flow

Step Action

Step 1 Traffic monitoring data are stored in the log facility of the Security Agent (IDS).

Step 2 The Security Agent (IDS) detects abnormal and intrusive cloud network traffic.

Step 3 Detected events are stored in the Monitoring Data repository.

Step 4 The Unicorn Dashboard informs the application admin about detected abnormal traffic
events.


Post condition/Result Detected abnormal traffic events are collected and made available to the Unicorn
interested entities (e.g., Cloud Application Administrator).

Alternate Flow

Step 2 The Security Agent (IDS) does not detect any abnormal and intrusive cloud network
traffic.

Exceptional Flow

Exception 2 The Security Agent (IDS) fails.
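Step 2 (detection) can be given a statistical flavour with a simple z-score rule over a recent traffic baseline; real IDS agents such as Snort rely on signature and rule matching, so the sketch below is only an illustrative stand-in, with made-up baseline figures.

```python
import statistics

# Illustrative detection step for UC.23: flag a traffic sample as abnormal
# when it deviates strongly from the recent baseline (z-score rule).

def is_abnormal(history, sample, z_threshold=3.0):
    """Return True when `sample` is a statistical outlier versus `history`."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return sample != mean
    return abs(sample - mean) / stdev > z_threshold

baseline = [100, 104, 98, 102, 96, 100]  # requests/s under normal load
alert = is_abnormal(baseline, 400)       # a burst well above the baseline
```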

UC.24: Manage Unicorn core context model

Table 28: Manage the Unicorn core context model

Use Case ID UC.24

Name Manage the Unicorn core context model

Description/Actor The Unicorn Administrator will have the capability to define new instances in the
context model, which will be taken into consideration at runtime.

Requirements FR.13

Precondition An existing core context model is provided as a base for extension.

Flow

Step Action

Step 1 The Unicorn Administrator loads the existing formal model in the Context Model
editor.
Step 2 Specific Class instances are created.

Step 3 The Context Model is validated and saved.

Post Condition/Result A new instance in the context model is saved.
Exceptional Flow


Exception 1 The core context model is not loaded.

Exception 2 The new model is not valid.
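A minimal sketch of the create-and-validate flow, assuming a toy dictionary-based model; the structure and class names are illustrative, not the actual Unicorn core context model format:

```python
# Hypothetical core context model: a set of known classes plus their instances.
core_model = {"classes": {"MonitoringMetric", "ElasticityPolicy"}, "instances": []}

def add_instance(model: dict, class_name: str, instance_id: str) -> None:
    """Create a new class instance (Step 2) and validate it (Step 3).

    Raises ValueError if the referenced class is unknown, mirroring the
    'new model is not valid' exceptional flow.
    """
    if class_name not in model["classes"]:
        raise ValueError(f"Unknown class: {class_name}")
    model["instances"].append({"class": class_name, "id": instance_id})

# Step 2-3: a valid instance is created and saved.
add_instance(core_model, "MonitoringMetric", "cpu_usage")
```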

UC.25: Manage enablers enforcing policies via code annotations

Table 29: Manage enablers enforcing policies via code annotations

Use Case ID UC.25

Name Manage enablers enforcing policies via code annotations

Description/Actor The Unicorn Administrator will have the ability to (de-)register and modify enablers
enforcing policies.

Requirements FR.14

Precondition Valid Core Context Model, Developed Enablers and Enablers Management Component
should be available.
Flow

Step Action

Step 1 The Unicorn Administrator uses the Enablers Management component to manage
enablers.
Step 2 The Unicorn Administrator selects the management action to perform, e.g.
registering a new enabler, deregistering an existing enabler, or updating an existing one.
Step 3 The Enabler Management component performs the selected changes.

Post condition/Result: The Unicorn Enablers are successfully managed.
Exceptional Flow

Exception 1 The Enablers management component fails to update the enabler.
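The register/update/deregister actions and the update-failure exception can be sketched with a toy registry; EnablerRegistry and its methods are hypothetical stand-ins for the Enablers Management component, not its actual interface:

```python
class EnablerRegistry:
    """Illustrative stand-in for the Enablers Management component."""

    def __init__(self):
        self._enablers = {}

    def register(self, name: str, config: dict) -> None:
        if name in self._enablers:
            raise ValueError(f"Enabler already registered: {name}")
        self._enablers[name] = config

    def update(self, name: str, config: dict) -> None:
        if name not in self._enablers:
            # Mirrors the exceptional flow: the update fails.
            raise KeyError(f"Cannot update unknown enabler: {name}")
        self._enablers[name] = config

    def deregister(self, name: str) -> None:
        self._enablers.pop(name, None)

registry = EnablerRegistry()
registry.register("scaling-enabler", {"policy": "horizontal"})
registry.update("scaling-enabler", {"policy": "vertical"})
registry.deregister("scaling-enabler")
```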


UC.26: Manage cloud application owners

Table 30: Manage cloud application owners

Use Case ID UC.26

Name Manage cloud application owners

Description/Actor The Unicorn Administrator will be able to register and manage cloud application
owners.

Requirements FR.12

Precondition -

Flow

Step 1 A Cloud Application Owner uses the Cloud IDE Plugin and fills the registration form.

Step 2 The Unicorn Administrator approves the Owner Registration and access is granted to
the Cloud Application Owner.

Post condition/Result: A new Cloud Application Owner is registered in the Unicorn platform.

Alternate Flow

Step 2-a The Unicorn Administrator revokes access to a particular Cloud Application Owner.

Step 2-b The Unicorn Administrator modifies information of a particular Cloud Application
Owner.

Step 2-c The Unicorn Administrator removes a Cloud Application Owner.

Exceptional Flow

Exception 2 The Cloud Application Owner cannot be approved.

Exception 2-b Cloud Application Owner information cannot be modified.

Exception 2-c The Cloud Application Owner cannot be deleted due to cascading issues.


6 Unicorn Demonstrators
In this section, the Unicorn demonstrators are elaborated as perceived in the initial stages of the project
implementation. In particular, a brief overview of the functionalities along with details on their technical as-is
implementation (in terms of architecture and technology stack) is provided for each demonstrator. The business
and technical challenges are also described in order to demonstrate the emerging, real-life need for the Unicorn
platform. Finally, the relevance of the Unicorn Use Cases (presented in Section 5) is discussed for each
demonstrator case.

The Unicorn demonstrators have been refined with respect to the Unicorn Description of Action in order to ensure
better alignment with the needs of the project. As explained in the following paragraphs, the Unicorn
demonstrators cover a wide, representative spectrum of cloud applications, ranging from big data analytics
(Demonstrator #1: Enterprise Social Data Analytics as described in section 6.1) and encrypted voice
communication (Demonstrator #2: Encrypted Voice Communication Service over Programmable Infrastructure
as described in section 6.2) to gaming (Demonstrator #3: Prosocial Learning Digital Game as described in section
6.3) and cloud development platforms (Demonstrator #4: Cyber-Forum Cloud Platform for Startups and SMEs
as described in section 6.4).

Further details on the demonstrators' evaluation strategy, which will allow the validation and evaluation of the
deployed Unicorn mechanisms in the project's pilot testbeds, are anticipated in WP6 Demonstration. In
particular, D6.1 Evaluation Framework and Demonstrators Planning will define the exact acceptance criteria
per demonstrator and the associated Key Performance Indicators (along with the methodology for as-is and to-
be measurements).

6.1 Enterprise Social Data Analytics

6.1.1 Overview
The S5 Enterprise Data Analytics Suite*Social is an enterprise data analytics engine-as-a-service addressing the
needs of modern businesses to track their online presence, understand the sentiment and opinions about
their products and brands, and distill customer needs and market trends. Built on an open-source big data
technology stack, the S5 Enterprise Data Analytics Suite*Social offers T-S-P-V-I functionalities, translated as:

- Track. Based on the settings provided by each enterprise (i.e. selected keywords, pages, accounts,
sources and timeframe, organized in projects that represent the domain of interest) and taking into
account the domain-specific knowledge (i.e. ontologies, taxonomies, dictionaries), the S5 Enterprise
Data Analytics Suite*Social retrieves unstructured data from selected web resources (RSS feeds), and social
data from selected social networks APIs (e.g. Twitter and Facebook).
- Store. All data collected are appropriately filtered (removing noise), harmonized and stored (before
and during processing) in an intelligent storage mechanism that facilitates retrieval, processing, indexing
and scaling.
- Process. By running a variety of algorithms in the background, real-time processing is performed and
results into automatic polarity and emotion detection, opinions and topics extraction, trends analysis
and prediction, and influencer identification.
- Visualize and Interact. Through intuitive and customizable dashboards, an enterprise is able to navigate
to emerging topics, sentiments and trends of interest in an interactive manner and save / export the
queries that have been executed, in order to revisit / review the results whenever needed.
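The Track-Store-Process portion of this flow can be illustrated with a toy pipeline; the sample posts, keyword matching and word-list polarity rule below are didactic simplifications, not the suite's actual algorithms or data sources:

```python
import re

# Toy items a Track step might retrieve (illustrative data, not a real API call).
posts = [
    "Love the new phone, great battery!",
    "BUY NOW!!! promo promo promo",
    "The new phone is terrible, screen broke",
]

POSITIVE = {"love", "great"}
NEGATIVE = {"terrible", "broke"}

def track(keyword, source):
    """Track: keep only items matching the project's keyword settings."""
    return [p for p in source if keyword in p.lower()]

def store(items, repo):
    """Store: filter obvious noise (e.g. promo material) before persisting."""
    repo.extend(p for p in items if "promo" not in p.lower())
    return repo

def process(repo):
    """Process: a minimal polarity detection over the stored items."""
    results = []
    for p in repo:
        words = set(re.findall(r"[a-z]+", p.lower()))
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
        results.append((p, label))
    return results

repo = store(track("phone", posts), [])
analysis = process(repo)
```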


In brief, the S5 Enterprise Data Analytics Suite*Social is based on keyword-, account- and page-based information
acquisition, information filtering, natural language processing, trend analysis and emotion analysis. Various
analytics algorithms and hybrid machine learning techniques are applied to extract relevant topics and
actionable data, to detect influencing behavior (through variations of the PageRank algorithm) and to back-trace
or simply follow the trail of retrieved data. In its intuitive dashboard, it provides a playground for
experimentation, with easy navigation to the results and smart filtering options for the user-friendly
visualizations (e.g. to remove promo material). Finally, through its collaboration features, the S5 Enterprise Data
Analytics Suite*Social allows a team to get access to the same project settings, to share live (in terms of
constantly updated) reports, and to contribute their comments and ideas as inspired by the social media
discussions and other online sources (with social features).

It needs to be noted that Demonstrator #1 was initially entitled Cloud-Based Personal Activity Tracking for the
Internet of Things in the Unicorn Description of Action; however, it was decided that the focus should shift from
personal data analytics to big data analytics for enterprises in order to test the Unicorn platform in a more
demanding and cloud resource-intensive business case.

6.1.2 Technical Implementation


The S5 Enterprise Data Analytics Suite*Social consists of 4 layers that are designed in a decoupled manner and
communicate seamlessly through RESTful APIs, as depicted in the platform's high-level architecture in Figure 21.

Figure 21: S5 Enterprise Data Analytics Suite*Social Architecture


The Data Collection Layer is responsible for collecting data according to the enterprise's preferences and settings,
in real time or via cron jobs, through RESTful connectors and crawlers. The relevant data are tracked and
harmonized in an appropriate manner to maintain their provenance, and are stored in the Scalable Storage Layer.

The Big Data Analysis Layer is at the core of the S5 Enterprise Data Analytics Suite*Social since it performs all
processing and computation tasks for document-level sentiment analysis, going beyond typical polarity
detection. Built on Apache Spark, this layer filters and cleans data prior to applying various Natural Language
Processing (NLP) techniques and machine learning algorithms for emotion analysis, correlation mining, topic
extraction and influencer detection.

The Scalable Storage Layer provides 3 core storage-relevant modalities: (a) Storage for domain ontologies,
trained models, dictionaries and vocabularies, (b) Storage for computation outcomes of the Big Data Analysis
Layer at its different (intermediate and final) stages and of the original data from the Data Collection Layer, (c)
Indexing for performing demanding queries and combining many types of searches on the data stored. In
addition, the Data Policies Enforcement component examines each query and allows or denies access to
specific team, project or report settings according to the enterprise subscription. This layer is mainly based on
Elasticsearch [124] and Couchbase [125].

The Interaction Layer handles all user interactions and provides social data-related insights into the collected
and derived data. The user is able to organize the topics of interest, perform and save queries in an intuitive way
and customize the analytics dashboard that visualizes the results in real time. Through the experimentation
playground, the user acquires full access to the results of the Big Data Analysis Layer and performs a kind of do-
it-yourself analysis. In this layer, the team collaboration, the project settings and the team management
functionalities are also provided. The Interaction Layer is built on the Django web framework and uses Vue.js
[126], Chart.js [127] and Kibana [128].

Overall, the S5 Enterprise Data Analytics Suite*Social is developed using Python and JavaScript.

6.1.3 Business and Technical Challenges


As for any cloud-based data analytics engine, the development, expansion and deployment of the S5
Enterprise Data Analytics Suite*Social face a set of inherent challenges at the business and technical levels that
typically require strategy decisions and continuous development, integration and deployment efforts,
respectively, in order to ensure uninterrupted performance and availability. In more detail, with the S5
Enterprise Data Analytics Suite*Social residing in a public cloud, the main business and technical challenges that
are currently faced can be listed as follows:

Technical Challenge 1: Data volume handling and computation scalability. The data volume collected per project
and per enterprise adds computational complexity and as the scale becomes larger, even trivial operations tend
to become costly. With regard to the various algorithms applied in the S5 Enterprise Data Analytics Suite*Social,
the time needed to perform the necessary computations increases exponentially and real-time processing
becomes less and less possible as the data size increases. An unexpected, exponential increase in data
volume, if not manually addressed through vertical scaling (more CPUs, memory and storage), may also
result in data loss and data replication issues in the Data Storage Layer (between Elasticsearch and Couchbase).

Technical Challenge 2: Algorithm experimentation and fine-tuning. Depending on the domain that is of interest
to an enterprise, the algorithms need to be fine-tuned to take into account domain knowledge and to select


new, highly relevant features to make machine learning perform better. In addition, the bundle of algorithms
that are supported in the S5 Enterprise Data Analytics Suite*Social needs to expand towards additional machine
learning libraries or platforms (e.g. for incremental learning, deep learning) in an experimental, agile way
(assume-test-evaluate-change), which requires continuous development, integration and deployment activities
as well as overcoming the difficulties encountered when working in high dimensional space.

Technical Challenge 3: Performance optimization. At the moment, the S5 Enterprise Data Analytics Suite*Social
requires an initial set-up time of several hours, from the moment a user creates a project until the results can be
navigated, in order for all algorithms to produce results irrespective of the exact project settings
(in terms of data volume, variety and velocity). In addition, the projects that an enterprise creates and shares
within its team are stored independently (in separate buckets), but share the same computation resources
without any further provisions for data load balancing, which affects the overall system performance that an
enterprise experiences. Certain data collection tasks and analytics algorithms offered by the S5 Enterprise Data
Analytics Suite*Social are also currently limited to being delivered over nightly processes, while the most
demanding algorithms (e.g. for influencer detection) are only executed on a bi-weekly basis.

Technical Challenge 4: Data security and privacy. Although the data collected and processed by the S5 Enterprise
Data Analytics Suite*Social are public, attention needs to be paid to data security and privacy aspects that are
currently handled through hard-coded functions without any flexibility for different policies depending on the
enterprise practices or the exact data nature.

Technical Challenge 5: Avoid cloud vendor lock-in. Although experimentation and test deployments have been
running in different cloud providers (e.g. Microsoft Azure, AWS, DigitalOcean, Heroku, clouding.io) at
development time, the production environment of the S5 Enterprise Data Analytics Suite*Social is deployed in a
single cloud provider, which implies dependence on a single provider and does not allow for any adaptations at
run time or per enterprise.

Business Challenge 1: Optimize pricing model. The S5 Enterprise Data Analytics Suite*Social business plan is based
on enterprise subscription plans, yet it is difficult to forecast, monitor and match cloud resource demand per
enterprise with cloud vendor service supply. Through real-time deployment at affordable prices in different
cloud providers, the S5 Enterprise Data Analytics Suite*Social may experiment towards pay-as-you-go (PAYG)
pricing models at enterprise and project levels.

With the help of Unicorn, SUITE5 expects that the S5 Enterprise Data Analytics Suite*Social will be able to spin up
and deploy, at will, new and disposable unikernel execution environments for each enterprise and for specific
enterprise projects to perform on-demand intensive analytic jobs of different data volumes. This will not only
allow S5 Enterprise Data Analytics Suite*Social to reinforce its real-time analytics processes as a whole, but also to
better schedule batch/cron jobs, according to different criteria such as the accumulated data per time unit, the
selected algorithm and its expected execution duration and the daily pricing schedules of each cloud provider.
Regarding the continuous development of the Data Analytics Suite, the collaboration through workspace sharing
offered by Unicorn's IDE plugin, together with the continuous integration and deployment through the Unicorn
platform, will help to ensure the uninterrupted performance and availability of the data analytics engine. Finally, from a
business perspective, Unicorn will help towards adoption of more flexible PAYG pricing models.


6.1.4 Demonstrator Relevance to Unicorn Use Cases


The importance and relevance of the Unicorn Use Cases for the Enterprise Social Data Analytics Demonstrator
is presented in Table 31.

Table 31: Enterprise Social Data Analytics Relevance to Use Cases

ID Use Case User Roles Relevance to


demonstrator
UC.1 Define runtime policies and constraints Application Developer High
UC.4 Deploy Unicorn-compliant cloud application Application Admin High
UC.5 Manage the runtime lifecycle of a deployed Application Admin High
cloud application
UC.6 Manage privacy preserving mechanisms Application Admin High
UC.8 Monitor application behavior and performance Application Admin High
UC.9 Adapt deployed cloud applications in real-time Application Admin High
UC.11 Perform deployment assembly validation Application Tester High
UC.13 Manage cloud provider credentials Application Product Manager, High
Application Administrator
UC.14 Search for fitting cloud provider offerings Application Product Manager High
UC.15 Define application placement conditions Application Developer High
UC.17 Develop enablers enforcing policies via code Unicorn Developer High
annotations
UC.18 Provide abstract description of programmable Unicorn Developer High
cloud execution environments through unified
API
UC.19 Develop and use orchestration tools for (multi- Unicorn Developer High
)cloud deployments
UC.20 Manage programmable infrastructure, service Cloud Provider High
offerings and QoS
UC.21 Ensure secure data migration across cloud sites Unicorn Developer High
and availability zones
UC.22 Ensure security and data privacy standards Cloud Provider High
UC.23 Monitor network traffic for abnormal or Cloud Provider High
intrusive behavior.
UC.2 Develop Unicorn-enabled cloud applications Application Developer Medium
UC.3 Package Unicorn-enabled cloud applications Application Product Manager Medium
UC.7 Manage security enforcement mechanisms Application Admin Medium
UC.10 Get real-time notifications about security Application Admin Medium
incidents and QoS guarantees
UC.16 Develop code annotation libraries Unicorn Developer Medium
UC.24 Manage Unicorn core context model Unicorn Admin Medium
UC.25 Manage enablers enforcing policies via code Unicorn Admin Medium
annotations
UC.26 Manage cloud application owners Unicorn Admin Medium
UC.12 Perform security and benchmark tests Cloud Provider, Application Low
Admin


6.2 Encrypted Voice Communication Service over Programmable Infrastructure

6.2.1 Overview
Unicorn will be evaluated under real-life scenarios by UBITECH, through pilot testing of the solution that will be
created and by assessing the added value that Unicorn provides, with a main focus on the orchestration and
elasticity features of Unicorn. For this testing, ubi:Phone, an encrypted VoIP telephony service, will be used.
ubi:Phone is an IP telephony service developed by UBITECH that uses encrypted packets with Voice-over-IP
(VoIP) protocols to establish secure mobile end-to-end communication over the internet, and is available for
Android. It ensures call participants' privacy by blocking eavesdropping attacks through intermediate hardware
(e.g., PBXs, servers, routers) and prevents interception and man-in-the-middle attacks. The ubi:Phone
service is targeted specifically at mobile devices, using audio codecs and buffer
algorithms tuned to the characteristics of mobile networks and push notifications in order to better preserve
device battery. ubi:Phone is offered across IP networks and tailored for the demanding security needs with
support for the cryptographic key-agreement Zimmermann Real-time Transport Protocol (ZRTP) and for the
voice encryption Secure Real-time Transport Protocol (SRTP).

As ubi:Phone is a service that is demanding on resources, it is important for UBITECH to take advantage of
Unicorn in order to realize ubi:Phone as an encrypted VoIP telephony service based on cloud infrastructures.

6.2.2 Technical Implementation


The main components of the ubi:Phone encrypted voice IP telephony service are the following: (i) Service
Registration Component: it regards the registration of the end user's mobile device to a centralized registry.
Registration is unique per device and call participant based on hashing. (ii) Connection Establishment
Component: it regards the instantiation of the secure endpoints per call participant and the setup of a routing
path between these components. Such a path enables the mobile end-to-end communication between the
involved call participants. (iii) ZRTP Protocol Key Exchange & Encrypted VoIP Provision Component: it regards
the exchange of ZRTP keys between the call participants over the established routable path and the provision of
the encrypted VoIP service along with the monitoring of the overall performance in real-time.
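A minimal sketch of hash-based unique registration per participant and device; the exact key-derivation scheme of ubi:Phone is not documented here, so SHA-256 over the participant/device pair is an assumption for illustration only:

```python
import hashlib

registry = {}

def register_device(participant_id: str, device_id: str) -> str:
    """Derive a unique registration key per device and call participant via hashing.

    SHA-256 over the pair is an illustrative assumption, not ubi:Phone's
    actual scheme; the key is deterministic, so re-registration is idempotent.
    """
    key = hashlib.sha256(f"{participant_id}:{device_id}".encode()).hexdigest()
    registry[key] = {"participant": participant_id, "device": device_id}
    return key

k1 = register_device("alice", "android-pixel-7")
k2 = register_device("alice", "android-tab-s9")  # same user, second device: new key
```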

In ubi:Phone, the security of both the client and the server is rigorously taken into account and fully
addressed: security certificates are distributed with the application clients, and individual application
servers hold certificates that are signed by and validated against the certificate of ubi:Phone, eliminating any
requirement to trust unknown Certificate Authorities (CAs). In order to prevent man-in-the-middle (MITM)
attacks, two-way Secure Sockets Layer (SSL) authentication is used, so that the client application verifies the
identity of the server application, and the server application then verifies the identity of the SSL client
application.
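The two-way SSL authentication described above can be sketched with Python's standard ssl module; the certificate file names in the comments are hypothetical placeholders, not the actual ubi:Phone deployment artifacts:

```python
import ssl

# Server side: require and verify a client certificate (two-way SSL).
server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid cert
# server_ctx.load_cert_chain("server.crt", "server.key")   # server identity (placeholder paths)
# server_ctx.load_verify_locations("ubiphone-ca.crt")      # trust only the ubi:Phone certificate

# Client side: verify the server and present the bundled client certificate.
client_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
client_ctx.verify_mode = ssl.CERT_REQUIRED
client_ctx.check_hostname = True
# client_ctx.load_cert_chain("client.crt", "client.key")   # client identity (placeholder paths)
# client_ctx.load_verify_locations("ubiphone-ca.crt")
```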

From the technological point of view, the ubi:Phone back-end business logic is developed using the Spring
Framework and Java 8 and is completely stateless in order to achieve horizontal scalability. The Android
client application has been developed using the native Android SDK and state-of-the-art encryption
algorithms and cryptographic protocols in order to provide users with the strongest possible security.
Specifically, the following algorithms and protocols are implemented:


Data Encryption Algorithm: Advanced Encryption Standard 256-bit (AES-256)

AES is a symmetric-key algorithm, meaning the same key is used for both
encrypting and decrypting the data.

Cryptographic Protocol: Secure Sockets Layer (SSL)

SSL is a cryptographic protocol that provides communication security over the Internet. It uses
symmetric encryption for confidentiality (AES-256 in our solution).

Cryptographic Key-Agreement Protocol: Zimmermann Real-time Transport Protocol (ZRTP)


ZRTP is a cryptographic key-agreement protocol that negotiates the encryption keys between two end
points in a Voice over Internet Protocol (VoIP) telephony call based on the Real-time Transport
Protocol.

Voice Encryption Protocol: Secure Real-time Transport Protocol (SRTP)


The Secure Real-time Transport Protocol (or SRTP) defines a profile of RTP (Real-time Transport
Protocol), intended to provide encryption, message authentication and integrity, and replay protection
to the RTP data in both unicast and multicast applications.
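To illustrate the symmetric-key property stated above (the same key both encrypts and decrypts), the toy stream cipher below XORs data with a hash-derived keystream. It is a didactic sketch only, not AES-256, and should never replace a vetted AES implementation in a real deployment:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """Symmetric: the very same call both encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = b"shared-secret"
ciphertext = xor_cipher(key, b"hello, world")
plaintext = xor_cipher(key, ciphertext)  # applying the same key recovers the message
```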

6.2.3 Business and Technical Challenges


The UBITECH demonstrator will examine the potential of its own encrypted VoIP service by exploiting the
deployment of programmable and reconfigurable infrastructure in order to limit the voice latency introduced
by the computational overhead and to increase security.

In more detail, the main business and technical challenges that are currently faced can be listed as follows:

Technical Challenge 1: Limit Voice Latency by using lightweight virtual containers over scalable infrastructure.
Voice latency is the actual penalty for establishing real-time and secure end-to-end VoIP communication
services. The backend VoIP services of ubi:Phone are currently centralized and deployed to dedicated
infrastructure. Improving performance requires the deployment of dedicated resources and scaling through load
balancing techniques; this, however, imposes a great overhead in both time and cost. Using the micro-service
paradigm suggested by Unicorn and the programmable infrastructure can be the ideal solution for a service like
ubi:Phone.

Technical Challenge 2: Improve Voice Latency by de-centralizing secure VoIP services. In real-world scenarios that
require secure communication, call participants use their mobile devices scattered across the globe, with
latency increasing as more geo-distributed participants are added to the call. Hence, voice delay, which is often
found frustrating, discourages users from using encrypted VoIP telephony, sacrificing their privacy for less
secure services. Regarding this very challenging issue, UBITECH plans to design and release a new version of the
ubi:Phone service (deployed through the Unicorn platform), where, upon call initiation, ubi:Phone services are
immediately spawned near the call participants' locations to carry the encryption schema and cope with the
overhead imposed to establish secure, scalable and truly real-time mobile end-to-end VoIP communication.

Technical Challenge 3: Enhancing Security in ubi:Phone. As a security-enhancing product, ubi:Phone's security
is highly critical and one of the main challenges for UBITECH. One of the basic ways of increasing the security
level is to reduce the attack surface that malicious attackers can target. Using containers that contain only the
minimum of the needed libraries for each component, or even better using unikernels, can greatly improve the


security of ubi:Phone by reducing the attack surface. The security mechanisms currently utilized, such as the
security monitoring of the VoIP services, also have to be redesigned for the de-centralized and scalable version
of ubi:Phone.

In summary, UBITECH expects to use Unicorn to advance its own encrypted VoIP service, limiting the voice
latency introduced by the computational overhead and increasing security. Issues of both scalability and
locality will be addressed in order to exploit multi-site programmable and reconfigurable infrastructure.
Improvements to the security of the ubi:Phone service will also be realized, thanks to the new development
paradigm that will be followed and the privacy and security mechanisms offered by Unicorn.

6.2.4 Demonstrator Relevance to Unicorn Use Cases


The importance and relevance of the Unicorn Use Cases for the Encrypted Voice Communication Service
Demonstrator is presented in the following table.

Table 32: ubi:phone Relevance to use cases

ID Use Case User Roles Relevance to


demonstrator
UC.1 Define runtime policies and constraints Application Developer High
UC.2 Develop Unicorn-enabled cloud applications Application Developer High
UC.3 Package Unicorn-enabled cloud applications Application Product Manager High
UC.4 Deploy Unicorn-compliant cloud application Application Admin High
UC.5 Manage the runtime lifecycle of a deployed cloud Application Admin High
application
UC.6 Manage privacy preserving mechanisms Application Admin High
UC.7 Manage security enforcement mechanisms Application Admin High
UC.9 Adapt deployed cloud applications in real-time Application Admin High
UC.10 Get real-time notifications about security Application Admin High
incidents and QoS guarantees
UC.11 Perform deployment assembly validation Application Tester High
UC.15 Define application placement conditions Application Developer High
UC.17 Develop enablers enforcing policies via code Unicorn Developer High
annotations
UC.19 Develop and use orchestration tools for (multi- Unicorn Developer High
)cloud deployments
UC.20 Manage programmable infrastructure, service Cloud Provider High
offerings and QoS
UC.22 Ensure security and data privacy standards Cloud Provider High
UC.23 Monitor network traffic for abnormal or intrusive Cloud Provider High
behaviour.
UC.25 Manage enablers enforcing policies via code Unicorn Admin High
annotations
UC.8 Monitor application behaviour and performance Application Admin Medium
UC.13 Manage cloud provider credentials Application Product Manager, Medium
Application Administrator
UC.14 Search for fitting cloud provider offerings Application Product Manager Medium
UC.16 Develop code annotation libraries Unicorn Developer Medium


UC.18 Provide abstract description of programmable Unicorn Developer Medium


cloud execution environments through unified
API
UC.24 Manage Unicorn core context model Unicorn Admin Medium
UC.12 Perform security and benchmark tests Cloud Provider, Application Low
Admin
UC.21 Ensure secure data migration across cloud sites Unicorn Developer Low
and availability zones
UC.26 Manage cloud application owners Unicorn Admin Low

6.3 Prosocial Learning Digital Game

6.3.1 Overview
The Prosocial Learning Digital Game demonstrator is a cloud-based, multi-player game in which players develop
specific social skills. The game is founded on the hypothesis that children at risk of social exclusion, lacking
empathy and showing high levels of aggressive or anti-social behaviours, should benefit from digital games
tailored to teach prosocial skills. The game will, within the Unicorn project, provide developers in the video
gaming industry with a demonstrator of the use of Unicorn-enhanced cloud computing, enabling rapidly
scalable, cost-effective provisioning of IT infrastructure on demand, while still satisfying particularly strict
security and privacy requirements.

The Prosocial Learning Digital Game will be prepared for the implementation of personalised avatars, through
Redikod's legacy General Avatars framework, and built on the openly available Unity-based uMMORPG
framework [129]. This, in principle, enables a platform where it is easy to implement new scenarios for games,
where, hypothetically, a tool-box could be provided for teachers and students to set up game scenarios with a
brief narrative and possibly even building dialogue trees with NPCs (non-player character). However, the
implementation challenges regarding the PsL technology and platform and time consumed in getting the game
to the point it is today, most likely push any such efforts beyond the scope of the ProsocialLearn EU Funded
project [130], to a possible spin-off phase. ProsocialLearn project aims to use gamification of prosocial learning
for increased youth inclusion and academic achievement. It intends to deliver a series of disruptive innovations
for the production and distribution of prosocial digital games that engage children, as well as stimulate
technology transfer from the games industry to the educational sector. It is certainly within the scope of the
Unicorn project, to consider this level of complexity in the games provisioning.

6.3.2 Technical Implementation


The game is built on a foundation that not only enables Redikod, the developer, to efficiently design new
scenarios and settings, but, in a planned extension, will also allow teachers and students, jointly or separately, to
design new prosocial games, with challenges providing prosocial skills training opportunities that can be
shared with others, globally. The Prosocial Learning Digital Game is intended, of course, only as a first, current
prototype of this; it may very well change dramatically, from what is now presented here, during the course
of the Unicorn project.

Redikod is using Unity [131] for the game client and game server, with the uMMORPG multi-player game base, which
has been heavily modified. Both the game client and the game server run together in a Docker container
along with nginx [132] to host the game client, which is a WebGL build [133], and a nodejs [134] server using

expressjs [135] for external communication with a server in Madrid that currently hosts the game and
controls the game lifecycle. Redikod is currently implementing a voice chat solution that will use peerjs
[136] to achieve this.

Voice and face sensors that capture data corresponding to the users' engagement in the game are also being used.
This data can later be used to serve content in a way that resonates better with the user. For this
reason, WebRTC with its getUserMedia function call is used, which the peerjs library also
requires.

One important barrier encountered with Unity is that it does not support secure WebSockets. To work around this, Redikod had to include a plugin in the WebGL build that changes all ws:// calls to wss:// calls. This does not work on the server side, however, so nginx is used there as a reverse proxy: the Unity server speaks unsecured WebSockets while the client connects over secure ones. This is required if the game is to be served from an HTTPS page, which is increasingly desirable.
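
The reverse-proxy arrangement described above can be sketched as a minimal nginx configuration. This is an illustrative sketch only: the server name, certificate paths, WebSocket location and the Unity server port (7777) are assumptions, not the actual ProsocialLearn setup.

```nginx
# Hypothetical sketch: nginx terminates TLS and forwards secure
# WebSocket (wss://) traffic as plain ws:// to the Unity server.
server {
    listen 443 ssl;
    server_name game.example.org;            # assumed host name

    ssl_certificate     /etc/ssl/certs/game.crt;      # assumed paths
    ssl_certificate_key /etc/ssl/private/game.key;

    # Serve the WebGL build of the game client
    location / {
        root /var/www/game-client;
    }

    # Proxy secure WebSocket connections to the unsecured Unity server
    location /ws {
        proxy_pass http://127.0.0.1:7777;    # assumed Unity server port
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

With such a configuration the Unity server never needs to handle TLS itself, while browsers on an HTTPS page can still open wss:// connections.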

The integration of player-editable avatars has not yet begun, but the legacy code for General Avatars has
recently been successfully ported to HTML5 and WebGL, thus providing a basis for highly increased player
engagement, while also stressing the need for efficient and secure big data handling and communication.

6.3.3 Business and Technical Challenges


As an example of the practical technical issues currently at hand, Redikod is encountering problems with voice chat because it does not have access to the SSL certificate used to serve the webpages and services. This means that the peerjs server cannot be served from the same location as the game, i.e. the central ProsocialLearn platform infrastructure. Redikod is currently setting up a peerjs server on a separate web server to make this work. There are also issues with getting peerjs and the voice sensor to work simultaneously, especially in a push-to-talk environment.

Technical Challenge 1: Data volume handling. The game context presents both scalability and volatility constraints: the data do not adhere to a specific schema, are constantly updated, and require powerful storage/indexing/query engines running on tens, hundreds, or potentially even thousands of servers.

Technical Challenge 2: Rapid scaling. The ability to provide their games under rapidly growing, international consumer demand is an absolute requirement for talented new European SMEs in the games field. Game players have an enormous, always-expanding market offering to choose from. Disappointing a potential customer only once results, as a rule, not only in them never returning, but often also in them voicing their disappointment in public.

Technical Challenge 3: Security and authentication. Secure authorization and authentication must be taken into account to ensure that the correct educational professionals handle the data, and that students are identified in an anonymous but secure fashion. Security is a major concern, certainly because Redikod is addressing public markets and children, but also from a business protection perspective, as it is critical for any SME not to unnecessarily lose revenue to imitators, illegal copying and the like.

Technical Challenge 4: Compliance. Another critical issue is geographic data restriction: data from a specific country's educational institutions might not be allowed to leave that country's territory due to privacy and security laws adopted by individual countries, while EU data restriction directives must also be adhered to. Similar restrictions apply to minors in particular, also outside educational contexts. It can be argued that this is a business rather than a technical challenge, but it is in any case potentially resolved by the Unicorn platform.

Business Challenge 1: Distribution and efficiency. The immediate business issues centre on immature markets and distribution structures for learning games in particular, and for serious games in general. ProsocialLearn aims to address this also beyond the scope of the project, targeting a spin-off that utilizes the platform and content developed throughout the project.

With the help of Unicorn, Redikod expects that the development and deployment of the Prosocial Learning Digital Game will be facilitated. More specifically, Unicorn is expected to provision on-demand infrastructure resources at multiple cloud providers in a cost-effective manner that also optimizes the game's performance. Through Unicorn, the game will also be able to cope, at run-time, with rapidly growing user demand at different times and with enormous, constantly updated datasets of changing size, by applying elasticity policies through annotations within the game's source code. Security-wise, Unicorn provides the means, via security-enforcement annotations, to keep the Prosocial Learning Digital Game secure and to handle authentication and authorization issues. Finally, the Prosocial Learning Digital Game can properly manage privacy issues that may arise due to legislation adopted by individual countries; this is addressed by the privacy enforcement mechanisms of Unicorn, which can restrict data movement between geographic regions.

6.3.4 Demonstrator Relevance to Unicorn Use Cases


The importance and relevance of the Unicorn Use Cases for the Digital Gaming Demonstrator is presented in the
following table.

Table 33: Prosocial Learning Relevance to use cases

UC ID | Use Case | User Roles | Relevance to demonstrator
UC.2 | Develop Unicorn-enabled cloud applications | Application Developer | High
UC.5 | Manage the runtime lifecycle of a deployed cloud application | Application Admin | High
UC.6 | Manage privacy preserving mechanisms | Application Admin | High
UC.7 | Manage security enforcement mechanisms | Application Admin | High
UC.8 | Monitor application behaviour and performance | Application Admin | High
UC.9 | Adapt deployed cloud applications in real-time | Application Admin | High
UC.10 | Get real-time notifications about security incidents and QoS guarantees | Application Admin | High
UC.12 | Perform security and benchmark tests | Cloud Provider, Application Admin | High
UC.14 | Search for fitting cloud provider offerings | Application Product Manager | High
UC.1 | Define runtime policies and constraints | Application Developer | Medium
UC.3 | Package Unicorn-enabled cloud applications | Application Product Manager | Medium
UC.4 | Deploy Unicorn-compliant cloud application | Application Admin | Medium
UC.11 | Perform deployment assembly validation | Application Tester | Medium
UC.13 | Manage cloud provider credentials | Application Product Manager, Application Administrator | Medium
UC.15 | Define application placement conditions | Application Developer | Medium
UC.16 | Develop code annotation libraries | Unicorn Developer | Low
UC.17 | Develop enablers enforcing policies via code annotations | Unicorn Developer | Low
UC.18 | Provide abstract description of programmable cloud execution environments through unified API | Unicorn Developer | Low
UC.19 | Develop and use orchestration tools for (multi-)cloud deployments | Unicorn Developer | Low
UC.20 | Manage programmable infrastructure, service offerings and QoS | Cloud Provider | Low
UC.21 | Ensure secure data migration across cloud sites and availability zones | Unicorn Developer | Low
UC.22 | Ensure security and data privacy standards | Cloud Provider | Low
UC.23 | Monitor network traffic for abnormal or intrusive behaviour | Cloud Provider | Low
UC.24 | Manage Unicorn core context model | Unicorn Admin | Low
UC.25 | Manage enabler enforcing policies via Unicorn code annotations | Unicorn Admin | Low
UC.26 | Manage cloud application owners | Unicorn Admin | Low

6.4 Cyber-Forum Cloud Platform for Startups and SMEs

6.4.1 Overview
CyberForum e.V. is one of the largest and fastest growing IT networks in Germany and part of the largest software cluster in Europe [137]. Founded in 1997 as a non-profit network, it is a support organization for high-tech companies in the region of Karlsruhe, Germany. Its rich set of key stakeholders exceeds 1000 members, including regional public authorities, research institutions, SMEs and startups. In July 2013, CyberForum signed the Business Roaming Agreement [138], which offers its members the possibility to use worldwide infrastructure and obtain access to events, meeting points, offices and conference rooms. Hence, CyberForum's main goal is to support the business development of its members. Furthermore, CyberForum participates in the EU-funded project CentraLab, which intends to transform Europe into a topic-independent living lab for innovations, in due consideration of social, organizational and technological aspects. CyberForum has been awarded the Gold Label of the European Cluster Excellence Initiative.
CAS Software AG is an active member of CyberForum e.V. and the main software provider for the network. CAS Software AG therefore intends to support SMEs that are members of CyberForum e.V. in migrating their classical on-premise software solutions into cloud services, and startups in developing cloud applications, through the full development and solution runtime lifecycle. This support will include development support, by providing a supported development environment for cloud applications, as well as management support for the packaging, deployment and lifecycle management of cloud applications included in the CyberForum app marketplace as extended by Unicorn functionalities.
The functionality provided in the Unicorn project is going to be evaluated in the frame of the demonstrators. In order to derive detailed and analysed feedback on key performance indicators, e.g. performance, usability, cost reduction and trust extension, CAS intends to focus the evaluation on an in-depth study with one suitable CyberForum member.

6.4.2 Technical Implementation


With the support for CyberForum members in mind, the main intent of CAS can be formulated as follows. In the frame of Unicorn, CyberForum SME developers will be able to quickly develop their own xRM micro-services, or complement existing services, by utilizing the Unicorn design libraries and hosting their services on the Unicorn platform ecosystem, while reaching end users through the CyberForum marketplace. Thus, the CyberForum marketplace will provide interested companies with access to a large selection of micro-services (apps) and will support secure data exchange and universal integration across different apps, driven by the SMEs' needs. More specifically, the new CyberForum marketplace, extended with Unicorn functionality, will support network members, in particular SMEs and startups, in including security, privacy and elasticity by design features in their applications and in defining application characteristics for optimal resource allocation at design time and runtime.

The CyberForum Cloud Platform will be built on top of CAS's cloud-based anything relationship management (xRM) software SmartWe, which is designed to support customers in their daily work by providing a software tool tailored to the particular role of each user. Furthermore, SmartWe supports different ways to tailor the basic xRM solution to user-role-specific tasks, e.g. by adding only the needed apps or by using the provided SDK to develop custom apps and micro-services.

Enabling CyberForum members to develop their own micro-services in the environment of an app-based xRM cloud software that can be adapted and extended to diverse branches and needs is very challenging from a conceptual perspective, and even more so from a developer perspective. CAS has developed different SDK tools to adapt the basic SmartWe application to self-written micro-services. The app designer makes it possible to perform changes directly on the UI using the declarative descriptions of the underlying SmartDesign client technology. Custom data types and records that can be manipulated by self-written apps and micro-services can be created with the help of the graphical DB designer. The script engine available on the UI of the system, offering context-sensitive content assist, supports CyberForum members in calculating and manipulating fields on the forms of their micro-services.

The vision of CAS is that thousands of apps and micro-services become available through an app marketplace/cloud platform, which is composed of three parts for the following stakeholders:

3rd party app developer
CAS as platform provider
End user


Looking at a common app/micro-service publishing workflow from a high-level perspective, the following steps are performed:

1. The app developer uploads the app/micro-service and its description.
2. Tests are performed to ensure functionality and security/privacy.
3. The app developer receives a recommendation for a deployment environment and deploys the app/micro-service; the app/micro-service is published (an existing account and credentials at the cloud provider are a prerequisite).
4. The end user buys the app and uses it.
5. The app/micro-service developer and the platform provider receive monitoring and usage information.
6. The app/micro-service deployment infrastructure scales according to the computational resources needed and the actual number of users (affecting, e.g., also the size of the data storage for pre-calculated analytic results).
7. The app/micro-service developer also receives monitoring information to see whether the app is performing well.

Unicorn comes into play at different points. First, it supports the micro-service developer at the development stage by providing libraries that include state-of-the-art security and data privacy mechanisms in the application code. Furthermore, the deployment environment recommendation and automatic deployment reduce effort and costs for startups and SMEs that are not well experienced in deploying cloud applications. It also becomes clear that an intelligent and performant monitoring service is essential to implement an ecosystem consisting of different micro-services/apps hosted on different (multi-)cloud environments.

CAS OpenServer

CAS SmartWe is designed for anything relationship management purposes, supporting strictly separated multi-tenancy. A partitioned, traditional three-layer software architecture, including physical separation between the layers, determines the physical architecture of CAS Open. In general, CAS Open is a network of connected servers, operating as a federated cloud, over which CAS Software AG has jurisdiction. One or more CAS Open Server instances may serve requests from different users belonging to different tenants at the same time, as depicted in Figure 22.

Figure 22: CAS SmartWe and OPEN Deployment


Furthermore, supporting multi-tenancy can lead to tenants being hosted on shared servers. To sufficiently secure the sensitive data of specific tenants, the related customers may be allowed to operate private nodes in the federated CAS Open cloud, and the CAS Open architecture includes a strict separation between the data belonging to different tenants.

The data tier consists of one or more relational database management systems (RDBMS). Separation of tenant data is achieved by storing each tenant's data in its own database. Apart from this, the data services offered are straightforward CRUD operations (create, retrieve, update, delete). Access to data is handled via an abstraction layer in the business logic tier, which is database-independent, as shown in Figure 23. Customised services usually require customised data types, for which the abstraction layer supports plugins for custom data access managers.

Figure 23: CAS OPEN Architecture

The business logic tier is made up of multiple CAS Open Server instances. The CAS Open Server is one
cornerstone of the CAS Open platform and serves as a central point to create, manipulate, store and retrieve
xRM-specific data. The CAS Open Server is responsible for connecting to the DBMS and for encapsulating
database-related functionality like transactions behind a high-level API. This API is the EIMInterface, the single
gateway through which calls may come from external web services, external RMI calls or from direct
administrative requests to manipulate data.

Apart from controlling data manipulation, the CAS Open Server provides a registry that hosts business logic
operations supporting the xRM services. These operations allow addition of new tenants and users, the setting
of user preferences and passwords, the management of authorisation rights to data, the logging of all changes
to data, and the linking or tagging of data to obtain aggregated views.

An important property of the CAS Open Server is that it is stateless. No state or session handover has to be
performed if one wants to switch to another server instance. This easily enables load balancing, i.e. multiple
servers can share the workload. CAS Open Server is specifically designed for good scalability. A higher workload
due to an increasing number of tenants and users can easily be addressed by adding new server instances and
employing a load balancing mechanism. However, common administrative tasks like cache synchronisation are
still necessary.


The presentation tier is the declaratively defined, HTML5 SmartDesign-based SmartWe. There are also native mobile solutions for iOS and Android, developed separately.

The clients receive output from the CAS Open Server business logic layer and maintain the state of visual widgets that are rendered in the client browser using JavaScript. Synchronization is handled by an AJAX connection, which supports a constant trickle of data in both directions without needing to refresh the web page in which the dashboard is displayed.

6.4.3 Business and Technical Challenges


From a business perspective, migrating on-premise software solutions to the cloud is challenging and difficult due to a lack of knowledge, time and resources. At the same time, current cloud platforms have significant weaknesses. According to KPMG [139], the main weaknesses are the following: (i) complex and costly development process: developing new SaaS solutions, or redeveloping existing solutions for the cloud on existing PaaS, is a complex and very costly process, often prohibitively so, especially for startups and SMEs; (ii) no influence on elasticity and trust issues [140]: most IaaS and PaaS providers are not EU-based and host their services outside of Europe, so EU SMEs and startups adopting such services are required to store and handle sensitive customer data outside of EU legal jurisdiction; and (iii) security concerns: deploying confidential information and critical IT resources in the cloud raises concerns about vulnerabilities and attacks, especially because of the anonymous, multi-tenant nature of the cloud [141]. CAS aims at integrating the results of the Unicorn project into the CyberForum IT environment and delivering a cloud app marketplace ecosystem for SME business applications covering project management, work scheduling and contact management. From a technical perspective, there are therefore different important aspects of the cloud app marketplace that will be enriched by the functionalities provided by Unicorn. In this frame, the following technical and business challenges are tackled.

Technical Challenge 1: Flexible deployment. The micro-services developed within the CyberForum Cloud App Marketplace are based on the underlying xRM platform provided by CAS. To deploy these micro-services into the cloud, different deployment possibilities exist, ranging from a full-service deployment at one dedicated cloud offering to a multi-cloud deployment with different services on different clouds and data exchange and communication between them. Here, the (multi-)cloud orchestration and automatic deployment functionality of the Unicorn framework, combined with the expected overcoming of vendor lock-in, extends the envisioned cloud app marketplace.

Technical Challenge 2: Application monitoring. Performance issues and incorrect cloud application behaviour can occur due to software bugs or an unfitting deployment environment. Observing running applications therefore helps, on the one hand, to improve the cloud application's functionality and to identify runtime issues; on the other hand, security and data privacy breaches are detected. The monitoring component of the Unicorn framework will enhance the cloud app marketplace such that CyberForum members are able to monitor the performance, security constraints and behaviour of Unicorn-compliant deployed cloud applications during their full runtime lifecycle and get real-time notifications about detected security incidents.

Business Challenge 1: Exploding costs and effort. One main issue regarding the development and/or migration of services into the cloud is the exploding costs and effort for the SMEs and startups that are members of CyberForum. Supporting non-expert cloud application developers and administrators in delivering security- and privacy-aware cloud applications, and throughout the whole application packaging and deployment cycle, reduces the costs and effort necessary for migrating services into the cloud.

Business Challenge 2: Trust issues. As already mentioned, trust issues are one of the main obstacles with respect to cloud services. To overcome them, security and privacy principles need to be integrated and proven. The CyberForum cloud app marketplace includes the Unicorn security and data privacy mechanisms in the cloud application lifecycle and is therefore able to reduce trust issues, thanks to IT security standards guaranteed to application end users.

Business Challenge 3: Developer support. Application developers at SMEs/startups are non-experts in the field of IT security and data privacy, while IT security and data privacy are among the main trust obstacles for cloud applications. To close this gap, the cloud app marketplace, in collaboration with the Unicorn framework, enables these developers to write security- and data-privacy-aware code through the Unicorn security and data privacy tools.

With the help of Unicorn, CAS expects that the CyberForum cloud app marketplace will support SMEs and startups during the whole lifecycle of cloud applications, i.e. development, operation and maintenance. The developed cloud applications will include highly complex security and privacy mechanisms that guarantee a certain level of data security to customers, which helps to overcome trust issues. Micro-services can be deployed flexibly, on classical VMs or using the unikernel approach, at will, on one dedicated cloud or as multi-cloud deployments taking advantage of distributed resources. Monitoring of deployed applications enables cloud application developers and administrators, assisted by Unicorn, to adapt deployed applications in real time. Overall, including Unicorn functionalities in the CyberForum Cloud App Marketplace opens the cloud market to SMEs and startups due to reduced development costs and effort.

6.4.4 Demonstrator Relevance to Unicorn Use Cases


The importance and relevance of the Unicorn Use Cases for the Cyber-Forum Cloud Platform Demonstrator is
presented in the following table.

Table 34: Cyber-Forum Relevance to use cases

UC ID | Use Case | User Roles | Relevance to demonstrator
UC.1 | Define runtime policies and constraints | Application Developer | High
UC.2 | Develop Unicorn-enabled cloud applications | Application Developer | High
UC.4 | Deploy Unicorn-compliant cloud application | Application Admin | High
UC.5 | Manage the runtime lifecycle of a deployed cloud application | Application Admin | High
UC.7 | Manage security enforcement mechanisms | Application Admin | High
UC.8 | Monitor application behaviour and performance | Application Admin | High
UC.10 | Get real-time notifications about security incidents and QoS guarantees | Application Admin | High
UC.11 | Perform deployment assembly validation | Application Tester | High
UC.12 | Perform security and benchmark tests | Cloud Provider, Application Admin | High
UC.14 | Search for fitting cloud provider offerings | Application Product Manager | High
UC.15 | Define application placement conditions | Application Developer | High
UC.21 | Ensure secure data migration across cloud sites and availability zones | Unicorn Developer | High
UC.22 | Ensure security and data privacy standards | Cloud Provider | High
UC.23 | Monitor network traffic for abnormal or intrusive behaviour | Cloud Provider | High
UC.3 | Package Unicorn-enabled cloud applications | Application Product Manager | Medium
UC.6 | Manage privacy preserving mechanisms | Application Admin | Medium
UC.9 | Adapt deployed cloud applications in real-time | Application Admin | Medium
UC.13 | Manage cloud provider credentials | Application Product Manager, Application Administrator | Medium
UC.18 | Provide abstract description of programmable cloud execution environments through unified API | Unicorn Developer | Medium
UC.19 | Develop and use orchestration tools for (multi-)cloud deployments | Unicorn Developer | Medium
UC.26 | Manage cloud application owners | Unicorn Admin | Medium
UC.16 | Develop code annotation libraries | Unicorn Developer | Low
UC.17 | Develop enablers enforcing policies via code annotations | Unicorn Developer | Low
UC.20 | Manage programmable infrastructure, service offerings and QoS | Cloud Provider | Low
UC.24 | Manage Unicorn core context model | Unicorn Admin | Low
UC.25 | Manage enabler enforcing policies via Unicorn code annotations | Unicorn Admin | Low

7 Implementation Aspects of Reference Architecture


In this chapter, we elaborate on the approach that will be followed in order to realise the functionalities described in this document and to implement the components that constitute the Unicorn framework, introduced in Chapter 5.

The Unicorn framework will be realized in three major releases based on implementation cycles. The first implementation cycle will be completed by the end of M18 with the delivery of the first release of the integrated Unicorn platform. This release will be tested in technical and functional terms, and the feedback will feed into the second implementation cycle for further refinement and improvement of the framework, leading to the second release in M27. The final platform (third release) will be delivered at the end of the project (M36), with slight improvements derived from and imposed by the demonstrators.


Figure 24: Major releases of Unicorn Integrated Framework

However, the plan depicted in Figure 24 reflects only the major releases of the platform, which are bound to specific deadlines and milestones. The actual development of the Unicorn components will be a continuous process that requires continuous integration and testing of the developed components in order to assure quality during the entire lifetime of the project.

The process that will be followed by the consortium can be represented as a continuous cycle comprising the following functional components: a) source code versioning and management, b) continuous integration, c) quality assurance of the generated code, d) persistent storage of the generated builds (a.k.a. artefacts) and e) issue/bug tracking. While this workflow has been decided upon, there may still be changes in the implementation aspects of the components during the project lifecycle; mature tools that support each step of the process have been selected. These tools are depicted in Figure 25 below. More specifically, they are: a) Git for source code versioning, b) Jenkins for continuous integration, c) SonarQube for code quality assurance, d) Nexus for artefact management and e) GitHub for issue/bug tracking.


Figure 25: Development Lifecycle

In the following sections, we briefly provide more information about the selected tools and how they help the consortium maintain a continuous pipeline for developing, integrating and testing the Unicorn framework.

7.1 Version Control System


A Version Control System (VCS) is a repository of files, often the source code of computer programs, with monitored access that tracks every change made to the files, along with related metadata such as the date of each change and the person who made it. Each tracked file can be reverted to previous versions, while the exact changes to the file are usually available. Version control systems are essential for any form of distributed, collaborative development, as they provide the ability to collaborate on the same files, to track each change in great detail, and to reverse changes when necessary.

Popular VCSs like CVS, SVN and Git were designed from the ground up to allow teams of programmers to work together on organized code repositories, and they facilitate file updates, annotation and merging.

In Unicorn, the consortium has selected Git as the primary VCS, due to its speed, distributed nature, branching capabilities, the small size of its repositories and the popularity of the online Git repository hosting and management platform GitHub. The Git repository is located at https://github.com/UBITECH/Unicorn. Access to this repository is limited to the consortium developers for the time being, but at later stages the consortium may decide to make the whole platform, or some of its components, public.

7.2 Continuous Integration


Continuous Integration (CI) is a software development practice in which the members of a team integrate their work frequently; usually each contributor integrates his or her code at least daily, leading to multiple integrations per day. Each integration cycle is verified by an automated build that includes testing, in order to detect integration errors as quickly as possible. This approach greatly reduces integration risk and is therefore a highly recommended practice for all distributed teams.

The consortium's selection for CI is Jenkins [142], an open source tool written in Java, which runs in a servlet container such as Apache Tomcat or the GlassFish application server. It supports version control tools like CVS, Subversion and Git, and it can execute Apache Ant and Apache Maven based projects, as well as arbitrary shell scripts and Windows batch commands.
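
A pipeline of this kind can be sketched as a minimal declarative Jenkinsfile. This is an illustrative sketch under assumed names: the stage layout, the Maven commands and the assumption that each component is Maven-based are ours, not the consortium's actual pipeline definition.

```groovy
// Hypothetical sketch of a declarative Jenkins pipeline for a
// Maven-based Unicorn component (all names are illustrative).
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }             // pull from the Git repository
        }
        stage('Build & Test') {
            steps { sh 'mvn -B clean verify' } // compile and run unit tests
        }
        stage('Quality') {
            steps { sh 'mvn -B sonar:sonar' }  // push metrics to SonarQube
        }
        stage('Publish') {
            steps { sh 'mvn -B deploy' }       // store artefacts in Nexus
        }
    }
}
```

Each commit triggers the full sequence, so a broken build or failing test surfaces within minutes rather than at integration time.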

7.3 Quality Assurance


In a project like Unicorn, it is important to measure the quality of the developed software and the progress of development, since the software is developed by distributed teams creating different components. Even though quality can be a subjective attribute, software structural quality characteristics have been clearly defined by the Consortium for IT Software Quality (CISQ), an independent organization founded by the Software Engineering Institute at Carnegie Mellon University. CISQ has defined five major characteristics that should be taken into consideration when assessing the quality of a piece of software: reliability, efficiency, security, maintainability and size. To track these characteristics, among others, we will use SonarQube [143] to analyse code quality and monitor the available metrics. SonarQube is an open source software quality platform that uses various static code analysis tools to extract software metrics, which can then be used to improve software quality. Basic metrics include duplicated code, coding standards compliance, unit test coverage, code coverage, code complexity, identification of potential bugs by severity, and the percentage of comments. SonarQube integrates easily into the Jenkins continuous integration pipeline.

7.4 Release Planning and Artefact Management


The next step in the development lifecycle is release planning and the management of the produced and required artefacts. An artefact repository is a collection of binary software artefacts and metadata stored in a defined directory structure; it can be used by clients such as Maven, Mercury, or Gradle to retrieve binaries during a build process. The introduction of an artefact repository is crucial for distributed teams following the CI pipeline, as it allows each new successful build to store the produced software components and make them available for deployment or further development of the integrated framework. Release management in the Unicorn project will be accomplished with the help of the Nexus Repository Manager [144], and it is also tightly connected with the selected branching model.
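With Maven-based builds, publishing artefacts to Nexus typically amounts to a distributionManagement section in the component's pom.xml, after which `mvn deploy` uploads the binaries; the repository ids and URLs below are placeholders for the project's Nexus instance:

```xml
<!-- Hypothetical deployment targets; `mvn deploy` publishes built artefacts here -->
<distributionManagement>
  <!-- Official releases go to the release repository -->
  <repository>
    <id>unicorn-releases</id>
    <url>http://nexus.example.org/repository/unicorn-releases/</url>
  </repository>
  <!-- Intermediate CI builds go to the snapshot repository -->
  <snapshotRepository>
    <id>unicorn-snapshots</id>
    <url>http://nexus.example.org/repository/unicorn-snapshots/</url>
  </snapshotRepository>
</distributionManagement>
```

Separating snapshot and release repositories mirrors the branching model: development-branch builds land in the snapshot repository, while tagged releases are promoted to the release repository.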

In Unicorn, we will use Git, which allows working with branches easily and in a structured way; different branches will help us ensure the quality of the source code and decrease the number of failures. As is usual in Git, there will be a master branch, which will be branched into a development branch, a release branch (used for the major releases) and, when needed, a hotfix branch. Furthermore, separate branches can be created per implemented feature. Upon the completion of each feature, the feature branch is merged into the development branch. Each commit performed on the development branch goes through the CI pipeline and creates updated versions of the binaries that are hosted in Nexus. Official releases will also go through the CI pipeline and will be hosted in the Nexus release repository.
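The feature-branch workflow described above can be sketched with plain Git commands in a throwaway repository; the repository location and the feature name are illustrative:

```shell
# Sketch of the described branching model in a temporary repository.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.org"
git config user.name "Unicorn Dev"
echo "v0" > app.txt
git add app.txt
git commit -qm "initial commit"              # on the default (master) branch
git checkout -qb develop                     # long-lived development branch
git checkout -qb feature/monitoring          # one branch per implemented feature
echo "metrics" >> app.txt
git commit -qam "add monitoring feature"
git checkout -q develop
# Merge the completed feature back into the development branch
git merge -q --no-ff -m "merge feature/monitoring into develop" feature/monitoring
git log --oneline -n 1
```

After the merge, the development branch contains the completed feature; in the Unicorn pipeline this merge commit would then trigger a Jenkins build whose binaries are stored in Nexus.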

7.5 Issue Tracking


The last step of the development lifecycle is issue/bug tracking, which requires a dedicated issue and bug tracking system. An issue tracker reachable by every developing partner needs to be in place to collect development-time issues such as problem reports, feature requests, and work assignments. In the frame of Unicorn, the GitHub issue tracker has been chosen for issues concerning coding, features and distribution. Reporting is typically done by creating a new issue via the front end of the issue/bug tracker; the newly created issue is then picked up by the responsible Unicorn developer.


8 Conclusions
The scope of this deliverable was to provide the Unicorn Reference Architecture. The Unicorn Reference Architecture aims to satisfy the functional and non-functional requirements that were formulated during the requirements analysis and documented in Unicorn D1.1 [1]. More specifically, D1.1 highlighted specific functional and non-functional requirements and identified the Unicorn actors required for the formulation of the Unicorn framework.

By defining the Unicorn Reference Architecture, we achieved: a) defining the architectural components that cover the functional aspects of the requirements; b) mapping the identified roles to the aforementioned components; and c) elaborating on each component by providing a usage walkthrough. At this point it should be clarified that the architecture is considered a reference architecture, since it can be subjected to multiple instantiations; specific components may even be implemented in completely different ways. In the frame of the project's implementation phase, a specific instantiation of the components will be performed, tailored to the needs of the use-cases.

In addition, we elaborated in more detail on the technical decisions taken regarding the technologies that will be used to implement Unicorn aspects and components. We recognised the importance of container orchestration on multi-cloud execution environments and identified the limitations that leading state-of-the-art projects such as Kubernetes [5] and Docker Engine [10] suffer from. Even though Docker is the leading container technology and Kubernetes the tool of preference for container orchestration (as the survey conducted for D1.1 suggests), both lack the ability to orchestrate container deployments across multiple cloud providers and/or availability regions and are incapable of managing the underlying infrastructure. To this end, Unicorn will capitalize on the capabilities of the Arcadia Smart Orchestrator, a tool designed to manage resources, which will be extended to provide support for containerized environments. Furthermore, within the frame of Unicorn, contributions will be made to the open-source community by providing an overlay network management extension for Kubernetes specifically designed for cross-cloud deployments.

After the Reference Architecture was defined, the demonstrator partners elaborated their demonstrator use cases as perceived in the initial stages of the project implementation. In particular, they briefly described the functionalities and technical details of their use-cases as well as the business and technical challenges involved. The result of this process was to clarify the importance and relevance of the Unicorn use cases in demonstrating the emerging, real-life need for the Unicorn framework.

Finally, the implementation plan and aspects of Unicorn were determined. Based on this plan, the Unicorn framework will be realized in three major releases based on implementation cycles, each one leading to an improved version of the previous, until project month 36, when the final release of Unicorn will be launched. To assure the best quality throughout the project, a continuous integration and continuous delivery pipeline was decided upon that includes a) source code versioning and management, b) continuous integration, c) quality assurance of the generated code, d) persistent storage of generated builds (a.k.a. artefacts) and e) issue/bug tracking. The tools selected to build this pipeline are as follows: a) Git for source code versioning, b) Jenkins for continuous integration, c) SonarQube for code quality assurance, d) Nexus for artefact management and e) GitHub for issue/bug tracking.


9 References
[1] Unicorn, Unicorn Deliverable D1.1 Stakeholders Requirements Analysis. 2017.

[2] Eclipse Che Cloud IDE, http://www.eclipse.org/che/.

[3] Docker, https://www.docker.com/.

[4] CoreOS Container Linux, https://coreos.com/os/docs/latest.

[5] Kubernetes, http://kubernetes.io/.

[6] Arcadia Orchestrator, http://www.arcadia-framework.eu/in-a-nutshell/what-is-arcadia/.

[7] Directed Acyclic Graph, https://en.wikipedia.org/wiki/Directed_acyclic_graph.

[8] OASIS TOSCA Committee, OASIS Topology and Orchestration Specification for Cloud Applications (TOSCA).

[9] Martin Fowler, Microservices - a definition of this new architectural term. 2014.

[10] Docker Engine, https://docs.docker.com/engine/.

[11] Lori MacVittie, Microservices and Microsegmentation, 2015.

[12] K. Brown and B. Woolf, Implementation Patterns for Microservices Architectures.

[13] Ten Commandments of Microservices, https://thenewstack.io/ten-commandments-microservices/.

[14] The Principles of Microservices, http://shop.oreilly.com/product/0636920043935.do.

[15] The Twelve-Factor App, https://12factor.net/.

[16] N. R. Herbst, S. Kounev, and R. Reussner, Elasticity in Cloud Computing: What It Is, and What It Is Not, in ICAC, 2013, pp. 23–27.

[17] N. Loulloudes, C. Sofokleous, D. Trihinas, M. D. Dikaiakos, and G. Pallis, Enabling Interoperable Cloud Application Management through an Open Source Ecosystem, IEEE Internet Comput., vol. 19, no. 3, pp. 54–59, 2015.

[18] RightScale, State of the Cloud Report 2017, 2017.

[19] Reuven Cohen, Examining Cloud Compatibility, Portability and Interoperability, 2009.

[20] OASIS CAMP Committee, OASIS Cloud Application Management for Platforms (CAMP).

[21] T. Metsch, A. Edmonds, and B. Park, Open Cloud Computing Interface - Infrastructure, Stand. Track, no. GFD-R, Open Grid Forum Doc. Ser., Open Cloud Comput. Interface Work. Group, Muncie, p. 17, 2016.

[22] Open Grid Forum, https://www.ogf.org.

[23] DMTF, Open Virtualization Format Specification, DMTF Virtualization Manag. VMAN Initiat., pp. 1–42, 2010.

[24] Distributed Management Task Force (DMTF), http://www.dmtf.org.

[25] T. Beale, S. Heard, D. Lloyd, and D. Kalra, Common Information Model, DMTF Policy WG, pp. 78–86, 2007.

[26] Apache JClouds, https://jclouds.apache.org/.

[27] Apache LibClouds, https://libcloud.apache.org/index.html.

[28] OpenNebula, https://opennebula.org/.

[29] Z. Kozhirbayev and R. O. Sinnott, A performance comparison of container-based technologies for the Cloud, Futur. Gener. Comput. Syst., vol. 68, pp. 175–182, 2017.

[30] B. Di Martino, G. Cretella, and A. Esposito, Advances in Applications Portability and Services Interoperability among Multiple Clouds, IEEE Cloud Comput., vol. 2, no. 2, pp. 22–28, Mar. 2015.

[31] R. Morabito, J. Kjällman, and M. Komu, Hypervisors vs. Lightweight Virtualization: A Performance Comparison, in 2015 IEEE International Conference on Cloud Engineering, 2015, pp. 386–393.

[32] M. L. Massie, B. N. Chun, and D. E. Culler, The Ganglia Distributed Monitoring System: Design, Implementation and Experience, Parallel Comput., vol. 30, 2003.

[33] Nagios, https://www.nagios.org/.

[34] I. Foster, Y. Zhao, I. Raicu, and S. Lu, Cloud Computing and Grid Computing 360-Degree Compared, in Grid Computing Environments Workshop, 2008. GCE '08, 2008, pp. 1–10.

[35] Michael J. Skok, Breaking Down the Barriers to Cloud Adoption. 2014.

[36] AWS CloudWatch, https://aws.amazon.com/cloudwatch/.

[37] Paraleap AzureWatch, https://www.paraleap.com/.

[38] M. Rak, S. Venticinque, T. Mahr, G. Echevarria, and G. Esnal, Cloud Application Monitoring: The mOSAIC Approach, in Cloud Computing Technology and Science (CloudCom), 2011 IEEE Third International Conference on, 2011.

[39] Y. Al-Hazmi, K. Campowsky, and T. Magedanz, A monitoring system for federated clouds, in Cloud Networking (CLOUDNET), 2012 IEEE 1st International Conference on, 2012, pp. 68–74.

[40] OpenStack Ceilometer, https://wiki.openstack.org/wiki/Telemetry.

[41] J. M. Alcaraz Calero and J. Gutierrez Aguado, MonPaaS: An Adaptive Monitoring Platform as a Service for Cloud Computing Infrastructures and Services, IEEE Trans. Serv. Comput., pp. 1–1, 2014.

[42] D. Trihinas, G. Pallis and M. D. Dikaiakos, Monitoring Elastically Adaptive Multi-Cloud Services, IEEE Trans. Cloud Comput., vol. 4, 2016.

[43] New Relic APM, http://newrelic.com/application-monitoring.

[44] DataDog, https://www.datadoghq.com.

[45] AppDynamics, https://www.appdynamics.com/.

[46] New Relic Overhead Discussions, https://discuss.newrelic.com/t/overhead-of-the-java-agent/13825.

[47] Docker Stats, https://docs.docker.com/engine/reference/commandline/stats/.

[48] cAdvisor, https://github.com/google/cadvisor.

[49] Scout monitoring, https://scoutapp.com/.

[50] G. Galante and L. C. E. De Bona, A survey on cloud computing elasticity, in Proceedings - 2012 IEEE/ACM 5th International Conference on Utility and Cloud Computing, UCC 2012, 2012, pp. 263–270.

[51] Amazon AWS Auto Scaling, https://aws.amazon.com/autoscaling/.

[52] Google Cloud Autoscaler, https://cloud.google.com/compute/docs/autoscaler/.

[53] Microsoft Azure Auto Scaling, https://azure.microsoft.com/en-us/features/autoscale/.

[54] Rackspace Auto-scale, https://www.rackspace.com/cloud/auto-scale.

[55] Amazon ECS, https://aws.amazon.com/ecs/.

[56] Amazon ECS Auto Scaling, http://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-auto-scaling.html.

[57] Google Container Engine, https://cloud.google.com/container-engine/.

[58] A. Almeida, F. Dantas, E. Cavalcante, and T. Batista, A branch-and-bound algorithm for autonomic adaptation of multi-cloud applications, Proc. - 14th IEEE/ACM Int. Symp. Clust. Cloud, Grid Comput. CCGrid 2014, pp. 315–323, 2014.

[59] R. Tolosana-Calasanz, J. Á. Bañares, C. Pham, and O. F. Rana, Resource management for bursty streams on multi-tenancy cloud environments, Futur. Gener. Comput. Syst., vol. 55, pp. 444–459, 2016.

[60] S. Dustdar, Y. Guo, B. Satzger, and H.-L. Truong, Principles of elastic processes, IEEE Internet Comput., no. 5, pp. 66–71, 2011.

[61] G. Copil, D. Moldovan, H. L. Truong, and S. Dustdar, SYBL: An extensible language for controlling elasticity in cloud applications, Proc. - 13th IEEE/ACM Int. Symp. Clust. Cloud, Grid Comput. CCGrid 2013, pp. 112–119, 2013.

[62] D. Tsoumakos, I. Konstantinou, C. Boumpouka, S. Sioutas, and N. Koziris, Automated, Elastic Resource Provisioning for NoSQL Clusters Using TIRAMOLA, IEEE Int. Symp. Clust. Comput. Grid, vol. 0, pp. 34–41, 2013.

[63] A. Naskos et al., Dependable horizontal scaling based on probabilistic model checking, Proc. - 2015 IEEE/ACM 15th Int. Symp. Clust. Cloud, Grid Comput. CCGrid 2015, pp. 31–40, 2015.

[64] D. Bermbach, T. Kurze, and S. Tai, Cloud Federation: Effects of federated compute resources on quality of service and cost, in Proceedings of the IEEE International Conference on Cloud Engineering, IC2E 2013, 2013, pp. 31–37.

[65] P. Kondikoppa, C.-H. Chiu, and S.-J. Park, MapReduce Performance in Federated Cloud Computing Environments, in High Performance Cloud Auditing and Applications, K. J. Han, B.-Y. Choi, and S. Song, Eds. New York, NY: Springer New York, 2014, pp. 301–322.

[66] N. Ferry, G. Brataas, A. Rossini, F. Chauvel, and A. Solberg, Towards Bridging the Gap Between Scalability and Elasticity, in CLOSER, 2014, pp. 746–751.

[67] N. Ferry, H. Song, A. Rossini, F. Chauvel, and A. Solberg, CloudMF: applying MDE to tame the complexity of managing multi-cloud applications, in Utility and Cloud Computing (UCC), 2014 IEEE/ACM 7th International Conference on, 2014, pp. 269–277.

[68] L. Jiao, J. Li, W. Du, and X. Fu, Multi-objective data placement for multi-cloud socially aware services, in INFOCOM, 2014 Proceedings IEEE, 2014, pp. 28–36.

[69] S. García-Gómez et al., 4CaaSt: Comprehensive management of Cloud services through a PaaS, in Parallel and Distributed Processing with Applications (ISPA), 2012 IEEE 10th International Symposium on, 2012, pp. 494–499.

[70] G. Copil, D. Moldovan, H. L. Truong, and S. Dustdar, SYBL+MELA: Specifying, monitoring, and controlling elasticity of cloud services, in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 2013, vol. 8274 LNCS, pp. 679–682.

[71] Y. Verginadis, A. Michalas, P. Gouvas, G. Schiefer, G. Hübsch, and I. Paraskakis, PaaSword: A Holistic Data Privacy and Security by Design Framework for Cloud Services, J. Grid Comput., no. March, pp. 219–234, 2017.

[72] Cloud Security Alliance, https://cloudsecurityalliance.org/.

[73] Secure Database Adapter, https://www.paasword.eu/results/secure-database-adapter-with-distribution-encryption-and-query-synthesis/.

[74] PaaSword Context-Aware Security Model, https://www.paasword.eu/results/context-aware-security-model/.

[75] OASIS, OASIS eXtensible Access Control Markup Language (XACML) TC.

[76] Drools, https://www.drools.org/.

[77] C. Modi, D. Patel, B. Borisaniya, H. Patel, A. Patel, and M. Rajarajan, A survey of intrusion detection techniques in Cloud, Journal of Network and Computer Applications, vol. 36, no. 1, pp. 42–57, 2013.

[78] Y. Tayyeb and D. S. Bhilare, Cloud security through Intrusion Detection System (IDS): Review of Existing Solutions, Int. J. Emerg. Trends Technol. Comput. Sci., vol. 4, no. 6, pp. 213–215, 2015.

[79] F. Karniavoura and K. Magoutis, A measurement-based approach to performance prediction in NoSQL systems, 25th IEEE Int. Symp. Model. Anal. Simul. Comput. Telecommun. Syst. (MASCOTS 2017), pp. 20–22, 2017.

[80] S. Antonatos, K. G. Anagnostakis, and E. P. Markatos, Generating realistic workloads for network intrusion detection systems, ACM SIGSOFT Softw. Eng. Notes, vol. 29, no. 1, p. 207, 2004.

[81] B. D. Cabrera, J. Gosar, W. Lee, R. K. Mehra, A. Drive, and W. C. Park, On the Statistical Distribution of Processing Times in Network Intrusion Detection, 43rd IEEE Conf. Decis. Control. Dec, no. December, pp. 1–6, 2004.

[82] G. Vasiliadis, S. Antonatos, M. Polychronakis, E. P. Markatos, and S. Ioannidis, Gnort: High Performance Network Intrusion Detection Using Graphics Processors, in Proceedings of the 11th International Symposium on Recent Advances in Intrusion Detection, 2008, pp. 116–134.

[83] D. L. Cook, J. Ioannidis, and J. Luck, Secret Key Cryptography Using Graphics Cards, Organization, 2004.

[84] L. Marziale, G. G. Richard III, and V. Roussev, Massive Threading: Using GPUs to Increase the Performance of Digital Forensics Tools, Digit. Investig., vol. 4, pp. 73–81, Sep. 2007.

[85] F. Yu, R. H. Katz, and T. V. Lakshman, Gigabit rate packet pattern-matching using TCAM, in Proceedings - International Conference on Network Protocols, ICNP, 2004, pp. 174–183.

[86] S. Yusuf and W. Luk, Bitwise optimised CAM for network intrusion detection systems, in Proceedings - 2005 International Conference on Field Programmable Logic and Applications, FPL, 2005, vol. 2005, pp. 444–449.

[87] R. Sidhu and V. Prasanna, Fast regular expression matching using FPGAs, Field-Programmable Cust. Comput. Mach. 2001. FCCM '01. 9th Annu. IEEE Symp., pp. 227–238, 2001.

[88] H. C. Li, P. H. Liang, J. M. Yang, and S. J. Chen, Analysis on cloud-based security vulnerability assessment, in Proceedings - IEEE International Conference on E-Business Engineering, ICEBE 2010, 2010, pp. 490–494.

[89] S. Ristov, M. Gusev, and A. Donevski, OpenStack Cloud Security Vulnerabilities from Inside and Outside, CLOUD Comput. 2013 Fourth Int. Conf. Cloud Comput. GRIDs, Virtualization OpenStack, no. c, pp. 101–107, 2013.

[90] E. Kirda, A security analysis of Amazon's Elastic Compute Cloud service, IEEE/IFIP Int. Conf. Dependable Syst. Networks Work. (DSN 2012), pp. 1–1, 2012.

[91] R. Schwarzkopf, M. Schmidt, C. Strack, S. Martin, and B. Freisleben, Increasing virtual machine security in cloud environments, J. Cloud Comput. Adv. Syst. Appl., vol. 1, no. 1, p. 12, 2012.

[92] A. Donevski, S. Ristov, and M. Gusev, Nessus or Metasploit: Security Assessment of OpenStack Cloud, in The 10th Conference for Informatics and Information Technology (CIIT 2013), 2013, no. Ciit, pp. 269–273.

[93] Nolle et al., Continuous integration and deployment with containers. 2015.

[94] Chris Tozzi et al., The benefits of container development. 2015.

[95] Linux-VServer, http://linux-vserver.org/Welcome_to_Linux-VServer.org.

[96] OpenVZ, https://openvz.org/Main_Page.

[97] Oracle Solaris Zones, https://docs.oracle.com/cd/E18440_01/doc.111/e18415/chapter_zones.htm#OPCUG426.

[98] BSD Jails, https://www.freebsd.org/doc/handbook/jails.html.

[99] Linux Containers, https://linuxcontainers.org/.

[100] Docker libcontainer unifies Linux container powers, http://www.zdnet.com/article/docker-libcontainer-unifies-linux-container-powers/.

[101] Rkt, https://coreos.com/rkt.

[102] LXC/LXD Linux Containers, https://linuxcontainers.org/.

[103] CloudFoundry Warden/Garden, https://content.pivotal.io/blog/cloud-foundrys-container-technology-a-garden-overview.

[104] When to use Docker alternatives rkt and LXD, http://searchitoperations.techtarget.com/tip/When-to-use-Docker-alternatives-rkt-and-LXD.


[105] Docker vs CoreOS Rkt, https://www.upguard.com/articles/docker-vs-coreos.

[106] Docker Swarm, https://docs.docker.com/engine/swarm/swarm-tutorial/.

[107] Apache Mesos, http://mesos.apache.org/.

[108] ISO/IEC 25010:2011, https://www.iso.org/standard/35733.html.

[109] Eclipse Che Cloud IDE, https://eclipse.org/che.

[110] Dockerfile, https://docs.docker.com/engine/reference/builder/.

[111] GWT Project, http://www.gwtproject.org/.

[112] Orion Editor, https://orionhub.org/.

[113] CAMF Eclipse Project, https://projects.eclipse.org/proposals/cloud-application-management-framework.

[114] Eclipse RCP, https://wiki.eclipse.org/Rich_Client_Platform.

[115] Ubuntu Core, https://www.ubuntu.com/core.

[116] PaaSword Security Policy Models, https://www.paasword.eu/results/paasword-policy-models/.

[117] OASIS, TOSCA Simple Profile for Network Functions Virtualization (NFV) Version 1.0.

[118] Netflix, https://www.netflix.com/.

[119] Spotify, https://www.spotify.com.

[120] Java Community Process, https://www.jcp.org/en/jsr/detail?id=308.

[121] W3C, W3C XML Schema Definition Language (XSD).

[122] Arcadia Context Model, http://www.arcadia-framework.eu/documentation/context-model/.

[123] Production System, https://en.wikipedia.org/wiki/Production_system_(computer_science).

[124] Elasticsearch, https://www.elastic.co/.

[125] Couchbase, https://www.couchbase.com/.

[126] Vue.js, https://vuejs.org/.

[127] Chart.js, http://www.chartjs.org/.

[128] Kibana, https://www.elastic.co/products/kibana.

[129] uMMORPG, https://ummorpg.net/documentation/.

[130] ProsocialLearn Project, http://prosociallearn.eu/.

[131] Unity 3D Engine, https://unity3d.com/.

[132] Nginx, https://www.nginx.com/.

[133] WebGL, https://get.webgl.org/.

[134] Node.js, https://nodejs.org/en/.

[135] Express.js, https://expressjs.com/.

[136] PeerJS, http://peerjs.com/.

[137] Software Cluster, http://software-cluster.com.

[138] Business Accelerator, http://c55bra.com/.

[139] KPMG Cloud Monitor, http://www.kpmg.com/DE/de/Documents/cloudmonitor-2014-kpmg.pdf.

[140] Cloud in Europe: Uptake, Benefits, Barriers, and Market Estimates, http://cordis.europa.eu/fp7/ict/ssai/docs/study45-workshop-bradshaw-pres.pdf.

[141] CAS, The Need for Cloud Computing Security. A Trend Micro White Paper. July 2010.

[142] Jenkins, https://jenkins.io/.

[143] SonarQube, https://www.sonarqube.org/.

[144] Sonatype Nexus, https://www.sonatype.com/nexus-repository-sonatype.