Myungho Jung
myungho.jung@utah.edu
University of Utah
I. INTRODUCTION
As the scale of web services grows, the microservices architecture is becoming the trend for developing large-scale web applications. A web service is composed of many small, loosely coupled, highly scalable applications that can be combined into a large system. As microservices become popular, compatibility between platforms is one of the crucial factors that decide the quality of an application. There are several ways to deploy applications as microservices. First, an application can be deployed directly on a host. Although this is straightforward and efficient in terms of resources, it is not optimal for portability: it may not work if the platform or operating system of the host differs from the development environment.
One solution is using a container such as Docker. A container packages an application together with its required libraries. The advantage of containers is that they are fast and compatible across hosts running the same kind of kernel. However, a container shares the kernel with the host OS, which leaves it vulnerable to attacks aiming at the host kernel.
Running applications on virtual machines can also be a solution. The difference from containers is that each VM runs on a separate kernel, unlike containers, which share one. By doing so, it is more secure against kernel attacks, at the cost of performance.
Unikernel is a new approach that merges the advantages of containers and virtual machines. It is basically a virtual machine, but packaged with only the required frameworks and libraries. Therefore, it minimizes the attack surface by excluding unnecessary parts such as terminal input or ssh. Although this may make it difficult to debug and test applications, it is safer than containers, and external tools can assist development.
In this project, I focused on the clustering of unikernel microservices. Although unikernels are secure and lightweight, defining the network for a group of unikernel applications on a host is not easy. To simplify this, I suggest a way to abstract the internal networks of unikernels that work closely together: it makes multiple unikernel instances appear as a single host at the network layer. To deploy the service to hosts, a Python script and a configuration file are used to replicate the OpenFlow[2] network. The flows not only define which guest OS is accessible from outside, but also which internal paths between instances are allowed.
II. MOTIVATION
In a nutshell, the objective of the project is to abstract the internal networking structure and make a group of unikernels look like a single host at a higher level of the network. Microservices using unikernels are secure and easy to deploy. However, it sometimes takes a lot of work to integrate or duplicate web applications. Here is an example of a real-world problem. [...] can dynamically change the IP of each instance by setting option parameters. The problem is caused by the fixed IP addresses in the web application code and server configuration files. In this case, the nginx image includes the IP addresses of the PHP and MySQL servers, and we would have to rebuild the image to modify them. However, it is not always necessary to modify the code. FIG. 3 shows that there is no need to rebuild images to serve multiple services; in this case, it is enough to change the IP of each instance.
III. RELATED WORKS
Unikernels are still at an early stage compared to containers, and thus there are few projects on clustering unikernels. On the other hand, microservices on containers are already used in production, and there are many projects related to clustering containers. One of the most popular is Kubernetes[7]. It defines a group of containers working together as a pod, and it is a large-scale project covering the scaling, scheduling, and orchestration of containers. In addition, Tectonic[5] is a project from CoreOS for cluster management of containers. It supports Docker and CoreOS rkt containers. The Docker project is also working on Docker Swarm[6] to cluster containers using the Docker API. On top of clustering, Apache Mesos[4] deals with scaling and automated deployment. Amazon EC2[9] supports the management and clustering of containers on its cloud platform. Containers need to be replicated for scaling and load balancing, which is an important factor for microservices.
In spite of the popularity and maturity of containers, unikernels are emerging as a new platform for microservices. Docker, one of the most famous container projects, has already decided to support unikernels. Unikernel microservices will make systems more secure by isolating the kernels of instances from each other.
IV. SECURITY

A. Threat Model
Unikernels are relatively secure since each runs on a separate kernel. Therefore, even if one kernel is compromised by an attacker, it is hard to attack the next virtual machine directly. Nevertheless, we still need to deal carefully with packet flows between virtual machines. Instead of setting up a firewall for each guest OS, we can utilize OpenFlow to allow or block the path between two hosts. In FIG. 6, the PHP server needs to access MySQL to retrieve data. However, the nginx server does not need to access the database directly. If that path were allowed, an attacker might be able to attempt an SQL injection attack through the nginx server.
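This allow/block policy maps directly onto OpenFlow rules. The sketch below is a minimal illustration, not the project's actual script: the bridge name, IPs, and port numbers are taken from the example configuration in Section V, and the `ovs-ofctl` flow syntax comes from Open vSwitch. It builds commands that permit PHP-to-MySQL traffic on the MySQL port, while a low-priority catch-all drops everything else, so nginx can never reach the database directly.

```python
# Build ovs-ofctl commands that allow only the PHP -> MySQL path.
# Bridge name, IPs, and ports follow the example config; illustrative only.

def allow_flow(bridge, src_ip, dst_ip, dst_port, out_port):
    """Permit TCP traffic from src_ip to dst_ip:dst_port, forwarding to out_port."""
    match = (f"priority=100,tcp,nw_src={src_ip},"
             f"nw_dst={dst_ip},tp_dst={dst_port}")
    return ["ovs-ofctl", "add-flow", bridge, f"{match},actions=output:{out_port}"]

def default_drop(bridge):
    """Low-priority catch-all: anything not explicitly allowed is dropped."""
    return ["ovs-ofctl", "add-flow", bridge, "priority=0,actions=drop"]

cmds = [
    allow_flow("br-wp-user1", "10.0.0.11", "10.0.0.12", 3306, 7),  # php -> mysql
    default_drop("br-wp-user1"),  # nginx (or anything else) cannot reach mysql
]
for c in cmds:
    print(" ".join(c))
```

Because the drop rule has the lowest priority, only paths with an explicit allow rule carry traffic; every other pair of guests is isolated by default.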
One of the drawbacks of unikernel microservices is that it is hard to run rootkit detectors or antivirus software inside the instances, because the virtual machines have limited CPU and memory. In addition, if tens or hundreds of instances are running on a host, monitoring packets on each guest would cost a lot of resources. Fortunately, the boot time of a unikernel is extremely short, so it is often better simply to reboot a VM when its kernel is compromised or an attack is detected. To do this, an Intrusion Detection System (IDS) connected to a Software-Defined Networking (SDN) controller can help monitor packets from outside. When it detects a suspicious packet to a VM, we can choose an action to handle it, such as rebooting the VM, checking it with a rootkit detection tool, or rolling back its data.
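The reboot-on-detection policy can be sketched as a small handler that maps IDS alerts to recovery actions. The sketch below is purely illustrative: the alert format, the domain names, and the use of `virsh` to manage libvirt-controlled VMs are assumptions, not part of the project.

```python
import subprocess

# Map an IDS verdict to a recovery action for the affected VM.
# Rebooting is cheap because unikernels boot almost instantly.
ACTIONS = {
    "reboot":  lambda dom: [["virsh", "reboot", dom]],
    "restore": lambda dom: [["virsh", "destroy", dom],
                            ["virsh", "snapshot-revert", dom, "clean"],
                            ["virsh", "start", dom]],
}

def handle_alert(alert, dry_run=True):
    """Return (and optionally execute) the commands for one IDS alert.

    alert: dict with 'vm' (libvirt domain name) and 'action' keys.
    With dry_run=True the commands are only returned, not executed.
    """
    cmds = ACTIONS[alert["action"]](alert["vm"])
    if not dry_run:
        for c in cmds:
            subprocess.run(c, check=True)
    return cmds

# Example: a suspicious packet aimed at the PHP instance triggers a reboot.
print(handle_alert({"vm": "php-unikernel", "action": "reboot"}))
```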
V. IMPLEMENTATION

A. Rump Kernels

B. Open vSwitch

Open vSwitch provides bridges for the virtual machines. It works at network layer 2; however, layer 3 can also be controlled with the OpenFlow protocol[2]. In this project, there is a central bridge, and it is connected to the bridge of each group of virtual machines. The bridge for each group of unikernels is directly connected to the central bridge, which means every bridge can reach every other bridge without OpenFlow. However, thanks to OpenFlow, we can create any network topology by defining flows between the bridges. FIG. 7 shows the possible topologies when 4 bridges are connected to a central bridge.

C. Floodlight
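The star topology described above, one central bridge with a link to each group's bridge, can be built with `ovs-vsctl`. The sketch below only assembles the commands; the bridge names are illustrative, and patch ports are the standard Open vSwitch mechanism for linking two bridges.

```python
# Assemble ovs-vsctl commands for a central bridge with one patch link
# per group bridge. Names are illustrative, not the project's actual ones.

def link_to_central(group_bridge, central="br-central"):
    """Commands that create a group bridge and patch it to the central bridge."""
    p_out = f"patch-{group_bridge}-to-central"
    p_in = f"patch-central-to-{group_bridge}"
    return [
        ["ovs-vsctl", "add-br", group_bridge],
        # Patch port on the group bridge, peered with the central side...
        ["ovs-vsctl", "add-port", group_bridge, p_out, "--",
         "set", "interface", p_out, "type=patch", f"options:peer={p_in}"],
        # ...and its peer on the central bridge.
        ["ovs-vsctl", "add-port", central, p_in, "--",
         "set", "interface", p_in, "type=patch", f"options:peer={p_out}"],
    ]

cmds = [["ovs-vsctl", "add-br", "br-central"]]
for grp in ["br-wp-user1", "br-wp-user2"]:
    cmds += link_to_central(grp)
for c in cmds:
    print(" ".join(c))
```

With the physical star in place, the effective topology is then decided purely by which OpenFlow flows are installed on the central bridge.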
"name": "php",
"mac": "00:00:00:00:00:02",
"ip": "10.0.0.11",
"tcp-port": "8000",
"bridge-port": "6"
D.
Deployment
},
{
"name": "mysql",
"mac": "00:00:00:00:00:03",
"ip": "10.0.0.12",
"tcp-port": "3306",
"bridge-port": "7"
},
],
"internal-flows": [
{
"from":"nginx",
"to":"php"
},
{
"from":"php",
"to":"mysql"
}
],
"external-ports": [
{
"src-ip": "192.168.1.0/24",
"to": "10.0.0.10"
}
]
{
"name": "wordpress-user1",
"birdge": "br-wp-user1",
"external-ip": "192.168.1.10",
"hosts": [
{
"name": "nginx",
"mac": "00:00:00:00:00:01",
"ip": "10.0.0.10",
"tcp-port": "80",
"bridge-port": "5"
},
{
}
The Python script parses the JSON file and pushes static flows to the bridges. Although this process could be implemented with the Open vSwitch CLI alone, the Floodlight REST API is used so that each flow can be maintained by name.
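A minimal sketch of that translation step is shown below. It turns the `internal-flows` of the example configuration into named static-flow entries. The entry fields and the `/wm/staticflowpusher/json` endpoint follow Floodlight's documented Static Flow Pusher REST API, but the switch DPID, helper names, and inline config here are illustrative, not the project's actual code.

```python
import json

def flows_from_config(cfg, dpid="00:00:00:00:00:00:00:01"):
    """Translate a group config's 'internal-flows' into Floodlight
    static-flow entries. Each entry is named so it can later be
    updated or deleted through the REST API."""
    hosts = {h["name"]: h for h in cfg["hosts"]}
    entries = []
    for fl in cfg["internal-flows"]:
        src, dst = hosts[fl["from"]], hosts[fl["to"]]
        entries.append({
            "switch": dpid,                # DPID of the group's bridge
            "name": f'{cfg["name"]}-{fl["from"]}-to-{fl["to"]}',
            "priority": "100",
            "eth_type": "0x0800",          # match IPv4
            "ipv4_src": src["ip"],
            "ipv4_dst": dst["ip"],
            "active": "true",
            "actions": f'output={dst["bridge-port"]}',
        })
    return entries

# Abbreviated version of the example config above (MACs/ports omitted).
cfg = {
    "name": "wordpress-user1",
    "hosts": [
        {"name": "nginx", "ip": "10.0.0.10", "bridge-port": "5"},
        {"name": "php",   "ip": "10.0.0.11", "bridge-port": "6"},
        {"name": "mysql", "ip": "10.0.0.12", "bridge-port": "7"},
    ],
    "internal-flows": [{"from": "nginx", "to": "php"},
                       {"from": "php", "to": "mysql"}],
}
print(json.dumps(flows_from_config(cfg), indent=2))

# Pushing would then be one POST per entry, e.g. (not executed here):
# requests.post("http://controller:8080/wm/staticflowpusher/json",
#               data=json.dumps(entry))
```

Naming each flow after the config and endpoint pair is what makes maintenance by name possible: re-posting an entry with the same name replaces it instead of adding a duplicate.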