
Clustering Unikernel Microservices in Networking

Myungho Jung
myungho.jung@utah.edu
University of Utah

I. INTRODUCTION

As the scale of web services grows, the Microservices architecture is becoming the trend for developing large-scale web applications. A web service is composed of many small, scalable, loosely coupled applications that can be combined into a large system.
As Microservices become popular, compatibility between platforms is one of the crucial factors in the quality of an application. There are several ways to deploy applications as Microservices. First, an application can be deployed directly on a host. Although this is straightforward and efficient in terms of resources, it is not optimal for portability: it will not work if the platform or operating system of the host differs from the development environment.
One solution is to use a container system like Docker. A container packages the required libraries together with the application. The advantage of containers is that they are fast and portable across hosts running the same kind of kernel. However, a container shares the kernel with the host, which leaves it vulnerable to attacks aimed at the host kernel.
Running applications in virtual machines can also be a solution. The difference from containers is that each VM runs on a separate kernel instead of a shared one. This makes VMs more secure against kernel attacks, at the cost of performance.
Unikernel is a new approach that merges the advantages of containers and virtual machines. A unikernel is basically a virtual machine, but it is packaged with only the required frameworks and libraries. It therefore minimizes the attack surface by excluding unnecessary parts such as terminal input or ssh. Although this can make applications harder to debug and test, it is safer than containers, and external tools can assist development.
In this project, I focused on clustering unikernel Microservices. Although unikernels are secure and lightweight, defining networks for a group of unikernel applications on a host is not easy. To simplify this, I suggest a way to abstract the internal networks of unikernels that work closely together: multiple unikernel instances are made to appear as a single host at the network layer. To deploy the service to hosts, a Python script and a configuration file are used to replicate the Openflow[2] network. The flows not only define which guest OSes are reachable from outside, but also provide secure networking inside.

II. MOTIVATION

In a nutshell, the objective of the project is to abstract the internal networking structure and make a group of unikernels look like a single host at the higher levels of networking. Microservices built on unikernels are secure and easy to deploy. However, integrating or duplicating web applications sometimes involves a lot of work. Here is an example of a real-world problem.

FIG. 1. Example of integrating services

In FIG. 1, assume that web services developed by different companies or on different hosts are to be integrated. The problem is that this requires setting up a new network structure and rebuilding the virtual machine images, because IP addresses are embedded in the application code. Although that would be simple in this example, a longer and more complicated process is involved as the number of applications on a host increases.

FIG. 2. Pre-configured VM images for Web service

Let's start from a simpler example. As seen in FIG. 2, three virtual machines are running: an nginx, a PHP, and a MySQL server. This makes the problem clear. Assume that the administrator of a host wants to run multiple services from these images. We can dynamically change the IP of each instance by setting option parameters. The problem is caused by the fixed IP addresses in the web application code and server configuration files. In this case, the nginx image includes the IP addresses of the PHP and MySQL servers, and we would have to rebuild the image to modify them. However, it is not always necessary to modify the code. FIG. 3 shows a case where no rebuild is needed to serve multiple services. In this case, it is enough to change the IP of

FIG. 3. A case of no IP confliction between services

nginx servers to avoid IP conflicts. However, this may not be very secure, because every user would have to share the PHP and MySQL servers; each user would therefore ask for VMs separate from the others'. To avoid IP conflicts while running a full set of virtual machines per user, Linux network namespaces help: create a network namespace for each user and run the instances inside it, as in FIG. 4. Each virtual machine can then keep the same IP on a single host, as long as layer 2 networking is handled carefully.

FIG. 4. An example requiring network namespaces to avoid IP confliction
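As a rough sketch of this per-user isolation (the namespace prefix and tap device names here are illustrative assumptions, not taken from the project), a script could generate the `ip netns` commands that move each user's guest interfaces into a private namespace:

```python
# Sketch: build the commands that give each user an isolated network
# namespace, so identical guest IPs on one host do not conflict.
# The "ns-" prefix and tap naming scheme are illustrative assumptions.

def netns_commands(user, taps):
    """Return the shell commands that isolate one user's instances."""
    ns = f"ns-{user}"
    cmds = [f"ip netns add {ns}"]
    for tap in taps:
        # Move each guest's tap device into the user's namespace,
        # then bring it up inside that namespace.
        cmds.append(f"ip link set {tap} netns {ns}")
        cmds.append(f"ip netns exec {ns} ip link set {tap} up")
    return cmds

# Example: two users, each running an identically addressed stack.
for user in ("user1", "user2"):
    for cmd in netns_commands(user, [f"tap-{user}-nginx", f"tap-{user}-php"]):
        print(cmd)
```

Running the emitted commands (as root) would place each user's guests behind a separate network stack, so the layer 2 handling described above stays per-user.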

In some cases, multiple Microservices are required to share resources and work together. In FIG. 5, the MySQL server would not be able to build a correct ARP table for each PHP server if the two servers had the same IP. So we need to allocate a virtual IP to each group of virtual instances. By doing so, we not only avoid IP conflicts, but the host OS can also use a subnet different from that of the virtual machines.

FIG. 5. Sharing a service from different hosts

III. RELATED WORKS

Unikernels are still at an early stage compared to containers, so there are few projects on clustering unikernels. Microservices on containers, on the other hand, are already used in production, and there are many projects related to clustering containers. One of the most popular is Kubernetes[7], which defines a group of containers working together as a pod; it is a large-scale project covering scaling, scheduling, and orchestration of containers. In addition, Tectonic[5] is a project from CoreOS for cluster management of containers; it supports Docker and CoreOS rkt containers. The Docker project is also working on Docker Swarm[6] to cluster containers using the Docker API. On top of clustering, Apache Mesos[4] deals with scaling and automated deployment. Amazon EC2 Container Service[9] supports the management and clustering of containers on a cloud platform. Containers must be replicated for scaling and load balancing, which is an important factor for Microservices.
In spite of the popularity and maturity of containers, unikernels are emerging as a new platform for Microservices. Docker, one of the most famous container projects, has already decided to support unikernels. Unikernel Microservices will make systems more secure by isolating the kernels of instances from each other.

IV. SECURITY

A. Threat Model

Unikernels are relatively secure because each one runs on a kernel separate from the others. Even if one kernel is compromised by an attacker, it is hard to attack the next virtual machine directly. Nevertheless, we still need to deal carefully with packet flows between virtual machines. Instead of setting up a firewall in each guest OS, we can use Openflow to allow or block the path between two hosts. In FIG. 6, the PHP server needs to access MySQL to retrieve data, but the nginx server does not need to access the database directly. If that path were allowed, an attacker might be able to attempt an SQL injection attack through the nginx server.

FIG. 6. Packet flows allowed and blocked

One drawback of unikernel Microservices is that it is hard to run rootkit detectors or antivirus tools, because the virtual machines have limited CPU and memory. In addition, if tens or hundreds of instances are running on a host, monitoring packets on each guest would cost a lot of resources. Fortunately, the boot time of a unikernel is extremely short, so it is often better simply to reboot an instance when its kernel is compromised or an attack is detected. To do this, an Intrusion Detection System (IDS) connected to a Software Defined Networking (SDN) controller can help monitor packets from outside. When it detects a suspicious packet to a VM, we can choose an action to handle it, such as rebooting the VM, checking it with a rootkit detection tool, or rolling back its data.
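As an illustration of such an allow/block policy, the sketch below builds a static flow entry for Floodlight's Static Flow Pusher REST API. The endpoint and field names follow Floodlight v1.x and, together with the DPID and port numbers, are assumptions for illustration rather than the project's actual values:

```python
import json
import urllib.request

# Sketch: whitelist a path with Floodlight's Static Flow Pusher.
# Only php -> mysql gets an allow rule; with a low-priority default
# drop, the nginx -> mysql path stays blocked. DPID, IPs, and port
# numbers below are illustrative assumptions.

def allow_flow(name, dpid, src_ip, dst_ip, out_port):
    """Build a static flow entry permitting src_ip -> dst_ip traffic."""
    return {
        "switch": dpid,          # datapath ID of the bridge
        "name": name,            # flows are maintained by name
        "priority": "100",
        "eth_type": "0x0800",    # match IPv4
        "ipv4_src": src_ip,
        "ipv4_dst": dst_ip,
        "active": "true",
        "actions": f"output={out_port}",
    }

def push_flow(controller, entry):
    """POST the entry to the controller's REST port (8080 by default)."""
    req = urllib.request.Request(
        f"http://{controller}:8080/wm/staticflowpusher/json",
        data=json.dumps(entry).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req).read()

entry = allow_flow("php-to-mysql", "00:00:00:00:00:00:00:01",
                   "10.0.0.11", "10.0.0.12", "7")
# push_flow("127.0.0.1", entry)  # requires a running controller
```

Because flows are named, the same entry can later be updated or deleted through the same API when an IDS decides a path should be cut.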

V. IMPLEMENTATION

Rump kernels[3], Open vSwitch[1], the Floodlight SDN controller[8], and Python scripts are used to construct the network. After the rump kernels are booted and attached to Open vSwitch bridges, Openflow rules are added for layer 2 and layer 3 networking. The process is automated by a Python script driven by a JSON configuration file.

A. Rump Kernels

Rump kernel[3] is a unikernel project based on the NetBSD kernel. Its advantage is that the kernel is relatively stable and lightweight. Boot time is on the order of seconds, much faster than that of a general-purpose VM. In terms of security, it may be safer than containers like Docker because a guest OS does not share its kernel with Domain 0.
One challenge in the implementation is that some guest OSes need two IP addresses, one for internal and one for external networking. In this project, I decided to add an additional network interface to avoid changing the routing table in the guest OS. In this case, two virtual machines may have the same external IP; thus, flow rules in network layers 2 and 3 must be carefully managed.

B. Open vSwitch

Open vSwitch provides bridges for the virtual machines. It works at network layer 2, but layer 3 can also be controlled with the Openflow protocol[2]. In this project, there is a central bridge connected to one bridge per group of virtual machines. Each group's bridge is directly connected to the central bridge, which means every bridge could reach every other one without Openflow. Thanks to Openflow, however, we can create any network topology by defining flows between the bridges. FIG. 7 shows possible topologies when 4 bridges are connected to a central bridge.

FIG. 7. Examples of network topologies
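One way to wire such a hub-and-spoke layout (a sketch; the bridge and patch-port names are illustrative, not the project's) is to connect each group bridge to the central bridge with an Open vSwitch patch-port pair, generating the `ovs-vsctl` commands from a script:

```python
# Sketch: generate the ovs-vsctl commands that connect one bridge per
# unikernel group to a central bridge via a patch-port pair.
# Bridge and port names are illustrative assumptions.

def connect_group(central, group):
    """Commands to create a group bridge and patch it to the hub."""
    a, b = f"p-{central}-{group}", f"p-{group}-{central}"
    return [
        f"ovs-vsctl --may-exist add-br {group}",
        # one end of the patch cable sits on the central bridge...
        f"ovs-vsctl add-port {central} {a} "
        f"-- set interface {a} type=patch options:peer={b}",
        # ...and the other end on the group bridge
        f"ovs-vsctl add-port {group} {b} "
        f"-- set interface {b} type=patch options:peer={a}",
    ]

cmds = ["ovs-vsctl --may-exist add-br br-central"]
for group in ("br-wp-user1", "br-wp-user2"):
    cmds += connect_group("br-central", group)

for cmd in cmds:
    print(cmd)
```

With all spokes physically attached to the hub, the Openflow rules alone decide which of the FIG. 7 topologies is actually realized.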

C. Floodlight

Floodlight is an SDN controller that can push static flows to a bridge. It also helps to monitor traffic on the network and to manage flows. We can connect an Intrusion

Detection System to the controller in order to monitor packets and decide which guest should be rebooted if its kernel is compromised.

D. Deployment

It is difficult to manage flows on a large number of bridges, even with an SDN controller. To automate the process, a Python script driven by a configuration file was developed for this project. As a result, the only files needed for deployment are the virtual machine images and a configuration file.
The deployment process is simple. First, create bridges for each group of virtual machines and connect them to the central bridge to make them reachable. After that, add extra network interfaces, via rumprun parameters, to the instances that should be accessible from other bridges. Then edit the configuration file to match the bridge settings. Finally, execute the script with the configuration file.

Listing 1. An example of the JSON configuration file

{
  "name": "wordpress-user1",
  "bridge": "br-wp-user1",
  "external-ip": "192.168.1.10",
  "hosts": [
    {
      "name": "nginx",
      "mac": "00:00:00:00:00:01",
      "ip": "10.0.0.10",
      "tcp-port": "80",
      "bridge-port": "5"
    },
    {
      "name": "php",
      "mac": "00:00:00:00:00:02",
      "ip": "10.0.0.11",
      "tcp-port": "8000",
      "bridge-port": "6"
    },
    {
      "name": "mysql",
      "mac": "00:00:00:00:00:03",
      "ip": "10.0.0.12",
      "tcp-port": "3306",
      "bridge-port": "7"
    }
  ],
  "internal-flows": [
    {"from": "nginx", "to": "php"},
    {"from": "php", "to": "mysql"}
  ],
  "external-ports": [
    {"src-ip": "192.168.1.0/24", "to": "10.0.0.10"}
  ]
}

The Python script parses the JSON file and pushes static flows to the bridges. Although this could be implemented with the Open vSwitch CLI alone, the Floodlight REST API is used so that flows can be maintained by name.

[1] Open vSwitch. http://openvswitch.org/. Accessed: 2016-04-02.
[2] OpenFlow. http://archive.openflow.org/. Accessed: 2016-04-02.
[3] Rump kernels. http://rumpkernel.org/. Accessed: 2016-04-02.
[4] Apache. Apache Mesos. http://mesos.apache.org/. Accessed: 2016-04-02.
[5] CoreOS. CoreOS Tectonic. https://tectonic.com. Accessed: 2016-04-02.
[6] Docker. Docker Swarm. https://docs.docker.com/swarm/. Accessed: 2016-04-02.
[7] Google. Kubernetes. http://kubernetes.io. Accessed: 2016-04-02.
[8] Big Switch Networks. Project Floodlight. http://www.projectfloodlight.org/. Accessed: 2016-04-02.
[9] Amazon Web Services. Amazon EC2 Container Service. https://aws.amazon.com/ecs/. Accessed: 2016-04-02.
