
Accepted Manuscript

Performance Comparison Of Scalable Rest Application Programming Interfaces In Different Platforms

Erdem KEMER, Ruya SAMLI

PII: S0920-5489(18)30405-7
DOI: https://doi.org/10.1016/j.csi.2019.05.001
Reference: CSI 3355

To appear in: Computer Standards & Interfaces

Received date: 12 November 2018
Revised date: 14 April 2019
Accepted date: 6 May 2019

Please cite this article as: Erdem KEMER, Ruya SAMLI, Performance Comparison Of Scalable Rest Application Programming Interfaces In Different Platforms, Computer Standards & Interfaces (2019), doi: https://doi.org/10.1016/j.csi.2019.05.001

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service
to our customers we are providing this early version of the manuscript. The manuscript will undergo
copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please
note that during the production process errors may be discovered which could affect the content, and
all legal disclaimers that apply to the journal pertain.
ACCEPTED MANUSCRIPT

Highlights

 WebAPI applications developed with different programming languages are compared with each other, independent of other factors.
 Rich comparisons of the performance of the applications are presented.
 A broad literature review of APIs is presented.
 Both the k6 and Locust testing models were applied to the system.
 This is the first paper of such breadth on WebAPI performance.



Performance Comparison Of Scalable Rest Application Programming Interfaces In Different Platforms

Erdem KEMER1,2, Ruya SAMLI1*


1 Istanbul University – Cerrahpasa, Engineering Faculty, Computer Engineering Department, 34320 Avcilar/Istanbul, Turkey
2 Amazon Web Services, Dublin, Ireland
* E-mail: ruyasamli@istanbul.edu.tr, Phone: +90 212 470 70 70

Abstract:
The number of devices connecting to the Internet is increasing day by day. At first, people connected to the Internet using their personal computers to gather information; the Internet then became more accessible with the growing popularity of portable computers and mobile phones. Today, smart home management systems and many other devices are interconnected over the Internet. As these devices are used in every aspect of people's lives, many daily tasks can be accomplished more easily than ever. Internet-connected devices are generally referred to as the "Internet of Things (IoT)". As the IoT becomes more and more popular, web application programming interfaces (WebAPIs) are now more important than ever. Today, all these devices, systems and modules are mostly integrated over WebAPIs, so the performance and scalability of WebAPIs are becoming very important. WebAPIs can be developed with many different programming languages using different frameworks and environments. In this paper, sample prototype RestAPIs, a special class of WebAPI, are developed using different programming languages and environments, and the performances of these RestAPIs are compared against each other with load tests and stress tests.

Keywords: WebAPI, RestAPI, Internet of Things, Performance Analysis, Load Test, Stress Test

1. INTRODUCTION

The challenges that businesses face today are changing their technical infrastructures, and loosely coupled components that can work in heterogeneous environments are needed more than ever. Service Oriented Architecture (SOA) represents an approach that facilitates this loose coupling and also provides adequate service quality for acceptable solutions [1]. It is a fundamental building block for enterprise Information Technology (IT) infrastructures and is the most widely used approach for managing business processes. According to some research, most enterprises already have solutions based on SOA, while others are planning to address the issue in the near future. The key property of SOA, and the one that makes it important for companies, is that its services are self-contained, self-identifying, loosely coupled, reusable software components [2]. The web service concept is one of the most important approaches to fulfilling the needs of the information-processing world. A web service, in its most general definition, is a computer program with a web interface that uses standard web protocols [3]. In another definition, a web service is a software system designed to support machine-to-machine interaction over a network. In this paper, different types of scalable RestAPIs, a special class of WebAPI, were developed on different platforms and their performances were compared. The rest of the study is organized as follows: Section 2 explains the background of APIs, WebAPIs and RestAPIs; Section 3 presents the materials and methods used in this paper, such as the tests, performance agents, and programming languages and environments; Section 4 presents the development of the APIs and compares their performances; and the last section evaluates the comparisons, analyzes the findings and results, explains the benefits and gains, and concludes the paper with future work.

2. BACKGROUND
2.1. Web Services
In today's IT world, services and applications are increasingly developed and hosted as services on the Internet. The associated deployment model, called on-demand applications or Software as a Service (SaaS), means that applications no longer require installation or manual upgrades. The World Wide Web (WWW) has gone through a series of steps since its creation to enable the development of on-demand software. Web pages were initially little more than simple text documents with limited user-interaction capabilities; they only allowed applications based on links, form-based data entry and full-page updates. Over time, with numerous interactive Internet technologies, it has become possible to create ever more interactive web pages with built-in support for advanced web and graphics features. Nowadays, steps are being taken towards building more complex applications by taking advantage of improved broadband networks and using the new capabilities of Web 3.0 and the upcoming Web 4.0 technologies, which are expected to arrive after 2020. The ability to download the required parts of a system only when needed provides many benefits over traditional models. On-demand applications do not need to be installed by users on their local systems and can therefore be upgraded more easily. Furthermore, since the same service is widely available to many users, it is much easier to solve complications than in the traditional model. In addition, the service provider can observe all users' behavior; this makes it easier to design comprehensive testing systems that focus on the most important aspects of the system. Another feature of on-demand applications is that they are usually deployed on the Internet and use the web browser as their working environment. As a result, applications are often built on application programming interfaces (APIs) based on web technologies such as HTTP (Hyper Text Transfer Protocol), REST (Representational State Transfer), SOAP (Simple Object Access Protocol) and JSON (JavaScript Object Notation), which can be reused for other operations. This enables the development of increasingly complex third-party applications that repeatedly reuse existing content and services. These applications, which combine content from various applications into an integrated experience and are called mashups, can be created by developers who are not directly associated with the original developers of the reused services [4].

2.2. Web Service Types


2.2.1. SOAP Web Services
SOAP is a protocol that provides a simple and lightweight mechanism for exchanging structured and typed information between peers in a distributed environment. SOAP does not define a programming model or any application-specific semantics itself; it defines a modular packaging model and encoding mechanisms for encoding data within modules. SOAP consists of three parts:

 SOAP envelope: defines an overall frame for expressing the contents of a message.
 SOAP encoding rules: define a mechanism that can be used to exchange instances of application-defined data types.
 SOAP RPC representation: a convention that can be used to represent remote procedure calls and responses.
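As a concrete illustration of the first part, the envelope is simply an XML frame wrapping an application-defined body. The sketch below builds one with Python's standard library; the `GetPrice` payload element is a hypothetical example, and the namespace URI is the standard SOAP 1.1 envelope namespace, not something defined in this paper:

```python
import xml.etree.ElementTree as ET

# Well-known SOAP 1.1 envelope namespace.
SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"

def make_envelope(payload_tag, text):
    """Build the general frame described above: an Envelope wrapping a Body
    that carries an application-defined payload element."""
    envelope = ET.Element("{%s}Envelope" % SOAP_ENV)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_ENV)
    payload = ET.SubElement(body, payload_tag)  # application-defined data
    payload.text = text
    return ET.tostring(envelope, encoding="unicode")

# Hypothetical remote-procedure payload, purely for illustration.
message = make_envelope("GetPrice", "apple")
```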

2.2.2. REST Web Services
Representational State Transfer (REST) refers to transferring a representation of the state of a resource. The operation of the model, illustrated through a simple web-based social application, is as follows:
1. A user visits the application's home page by typing the address into the browser.
2. The browser sends an HTTP request to the server.
3. The server responds with an HTML document that contains some links and forms.
4. The user enters information into a form and submits the form.
5. The browser sends another HTTP request to the server.
6. The server performs the requested job and responds with another page.
With a few exceptions, most websites and web-based applications follow this same model until the user stops. In this study, RestAPIs are examined.

2.3. Literature Research
APIs are a popular subject in the literature, and there are several studies on APIs developed for specific purposes. In [5], an API for the performance evaluation of modern processors is introduced; the study [6] describes an API design for IP network monitoring; and [7] designs a tool and API for EnMAP data processing. An API developed using the Python programming language and used for geographic resource analysis is described in [8]. Since each new API design has the potential to be considered a unique product, there are many patents related to this subject. [9] discloses the system and method of an access-level API to manage the heterogeneous components of a storage area network. The design and use of a mobile advanced telephone API is described in the patent [10]. In the patent [11], a common query runtime system and a related API are designed and described. In [12], the researchers developed a framework for meta-model objects and an API in 2003. The patent [13] designed an API that controls the allocation of physical memory in a virtual memory system. [14] introduced remote control methods for an API, and [15] includes software for an API that maps input devices controlling movement. An API for sensory events is included in the patent [16]. Considering the studies in the literature, it can be seen that APIs can be designed for many different applications. Accordingly, the successful design and high performance of APIs are becoming key points for a variety of applications. The performance of APIs can increase when criteria such as programming methods and programming languages and/or environments are selected appropriately.


3. MATERIALS AND METHODS


3.1. Tests Applied to RestAPIs
3.1.1. Load Test
Load testing is a type of test that checks how a system functions with a large number of concurrent virtual users performing transactions over a certain period of time [17]. In this study, different kinds of load tests, namely k6 tests and Locust tests, are applied.

3.1.1.1. k6 Test
k6 is an open-source, developer-oriented load testing tool specifically designed to measure the performance of the services running behind applications. k6 tests are built on the virtual user concept and generate load on applications with the desired number of virtual users. The behavior of the users of the application under test can be scripted in JavaScript files. During the load test, virtual users send requests to the application. k6 creates the load by running the test against the application in parallel with the desired number of virtual users, records the responses of the tested application to the requests made by the virtual users, and can also save this data to a time-series database.

In this paper, two different tests were performed using the k6 load testing tool. In the first test scenario, the number of virtual users sending requests to the RestAPI applications increases from 0 to 1000 in 60 seconds. This is a preparation step to ensure that the applications can carry the actual load. After this warm-up phase, 1000 virtual users continue to send requests to the application for 60 seconds, and the responses of the applications are recorded for analysis. Finally, to end the test, the number of virtual users is reduced from 1000 to 0 in 10 seconds. The second test was carried out to measure the stability of the applications, which were tested under a lower load: the load was created with 100 virtual users, the applications were tested under this load for 60 seconds, and the results were recorded. In order to eliminate transient problems that might occur in the applications, these tests were run five times and the results were recorded and examined separately.
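The staged profile of the first k6 scenario can be written out as a simple piecewise function. This is a Python illustration of the schedule only; actual k6 scenarios are configured in JavaScript test scripts:

```python
def virtual_users(t):
    """Virtual users at second t of the first k6 scenario described above:
    ramp 0 -> 1000 over 60 s, hold 1000 for 60 s, ramp down to 0 over 10 s."""
    if t < 0:
        return 0
    if t <= 60:                        # warm-up ramp
        return round(1000 * t / 60)
    if t <= 120:                       # steady full load
        return 1000
    if t <= 130:                       # ramp-down
        return round(1000 * (130 - t) / 10)
    return 0                           # test finished
```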

3.1.1.2. Locust Test
Locust is an open-source load testing tool whose tests are generally easier to develop than those of other load testing applications. Locust tests are written as simple Python script files: user behavior is implemented in Python code and included in the load test by Locust. Thanks to its event-based structure, the target application can be tested by creating thousands of virtual user loads.

In this paper, the Locust tool was used to test the maximum number of virtual users to which the RestAPI applications can respond while maintaining an optimal response time. In this context, the test was run 8 times in total: after the first run with 250 virtual users, the load was increased by 250 virtual users in each subsequent run (500, 750, 1000, 1250, 1500, 1750 and 2000), and the response times of the application were recorded.
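The virtual-user model that both tools rely on can be sketched with nothing but the Python standard library: each "user" is a thread that issues requests in a loop and records response times. This is a minimal illustration of the concept, not the Locust API; the throwaway JSON endpoint exists only to give the users something to call:

```python
import statistics
import threading
import time
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class Handler(BaseHTTPRequestHandler):
    """Throwaway endpoint standing in for the application under test."""
    def do_GET(self):
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep the hot path free of console I/O
        pass

def run_load(url, users, duration):
    """Each 'virtual user' thread sends requests in a loop for `duration`
    seconds and records every response time in milliseconds."""
    times, lock = [], threading.Lock()
    deadline = time.monotonic() + duration

    def user():
        while time.monotonic() < deadline:
            start = time.monotonic()
            with urlopen(url) as resp:
                resp.read()
            with lock:
                times.append((time.monotonic() - start) * 1000.0)

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return times

# Demo against a local server on an ephemeral port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_address[1]
samples = run_load(url, users=5, duration=0.5)
server.shutdown()
print("%d requests, median %.2f ms" % (len(samples), statistics.median(samples)))
```

A real Locust test would express the same loop declaratively as an `HttpUser` subclass with `@task` methods.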


3.1.2. Stress Test


Stress testing is a testing type that checks the upper limits of a system by testing it under extreme loads. In addition to what load testing covers, stress testing also examines memory leaks, slowness, security issues and data corruption [17]. In this study, a stress test is applied to all RestAPI applications.

3.2. Performance Agents
WebAPI performance is a critical issue for sites that serve a high volume of requests, and it is affected by a variety of factors [18]. Any third-party library used, the hardware that the application runs on (such as the processor or memory), and any I/O operation such as file or network access can affect the performance of a WebAPI application positively or negatively. In the development of all the prototypes within the scope of this study, we avoided using any third-party library unless it was absolutely necessary. Also, no additional features were developed that might impact the performance of the applications. The applications were developed and deployed as simply and plainly as possible to minimize the effects of any external dependencies. Apart from internal software dependencies, external software dependencies, such as application servers, have a big impact on the performance of applications. An application server basically provides a hosting environment for applications to run in, and it provides additional functionality to process or run the applications hosted on it. The most well-known web application servers can be listed as IIS (Internet Information Services) [19], Tomcat [20], Glassfish [21] and Weblogic [22]. Especially in recent years, simpler and embedded web servers have started to gain popularity. For example, Node.js applications have been able to work with an embedded web server since the first release, without the need for an additional application server. With this increasing popularity, similar support has been provided on the Java and .net platforms. Nowadays, many languages and platforms support web applications that can run on their own without an additional application server, and these applications can easily be hosted behind a load balancer.

In this study, the prototype RestAPI applications are designed to run self-hosted without requiring any external application server, so any performance hit or gain due to the application server can be avoided. Another factor affecting the overall performance of applications is the hardware they run on. All tests carried out within the scope of this study were performed using the same hardware, and it was verified that no additional devices or accessories were connected to the hardware during the tests. All RestAPI applications were run on real servers; the applications were not hosted on hardware running any virtualization technology. The logging components used throughout the tests were hosted in Docker containers. These components had no dependencies on the actual RestAPI applications, and the same instances were shared across all RestAPI application tests.

In order to compare the applications developed in the study, metrics such as the request response time, the average response time, the median response time and the total number of responses that the applications could deliver were recorded. The most important performance metric for the RestAPI applications is the request response time. Applications can respond with various response times depending on the load of incoming requests. During the tests, an application may respond to some requests slowly and to other requests very quickly. If only the average or median response time is examined, these fast and slow responses may be missed since they happen less frequently. Therefore, within the scope of the study, apart from the average and median response times, response times at different percentiles were also recorded. Percentile response times are denoted p80, p95 and p98, meaning the 80th, 95th and 98th percentiles. p95 is the maximum response time within which 95% of all incoming requests are answered. If an application responds to 5% of 1000 incoming requests (50 requests) in 300 milliseconds and to the rest in 50 milliseconds, the average response time will be 62.5 milliseconds and, on average, the application performance may be evaluated as high. But for the same results, the p98 value will be observed as 300 milliseconds, so sudden increases and decreases in the application's response time can be examined without being overlooked.
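The arithmetic of this example can be checked directly. The sketch below computes the average and nearest-rank percentiles for the hypothetical 1000-request sample described above:

```python
import statistics

# Hypothetical sample from the example above: 1000 requests, 50 slow
# responses at 300 ms and 950 fast responses at 50 ms.
response_times = [300] * 50 + [50] * 950

average = statistics.mean(response_times)  # (50*300 + 950*50) / 1000 = 62.5 ms

def percentile(samples, p):
    """Nearest-rank percentile: the smallest recorded time that covers
    at least p percent of all samples."""
    ordered = sorted(samples)
    rank = max(1, -(-len(ordered) * p // 100))  # ceil(n * p / 100)
    return ordered[rank - 1]

p80 = percentile(response_times, 80)  # 50 ms: the slow 5% sits above p80
p98 = percentile(response_times, 98)  # 300 ms: the slow tail is exposed
```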

3.3. Programming Languages and Environments
In this study, various RestAPI prototype applications were developed using the C#, Java, Go and Python programming languages and the Node.js environment, and various comparisons were carried out on the performance of these applications. The main features of these programming languages and environments can be summarized as follows.

The C# programming language was developed by Microsoft, led by Anders Hejlsberg [23]. It has a syntax very similar to C++. C# is a fully object-oriented language for the .net framework platform. Applications developed in C# are translated into an intermediate language, the Microsoft Intermediate Language (MSIL). The execution of the application is done by the Common Language Runtime (CLR) [24], which reads and interprets the generated MSIL file, translates it into machine language and executes the application. Windows applications, web applications, applications for Windows Azure, plug-ins for Microsoft Office, and database applications can be developed using C# and the .net framework. C# was used in this paper because it is a compiled language.

The Java programming language project was initiated in 1991 by James Gosling, Mike Sheridan, and Patrick Naughton [25]. The Java programming language shares a similar syntax with C and C++. Nevertheless, Java provides a simpler object model with fewer low-level capabilities than C and C++. Various types of applications can be developed with the Java programming language, such as distributed systems, web applications, web services, web application programming interfaces and mobile applications. The most important features of the Java language are that it is simple, object-oriented, platform-independent, robust, interpreted, multithreaded, distributed and dynamic [26]. Java applications run in a Java Virtual Machine (JVM), which makes them independent of the computer architecture [27]. Thanks to the JVM, Java applications can run on all platforms where the JVM can run. Java was used in this paper because it is a compiled language.

The Go programming language was conceived in September 2007 by Robert Griesemer, Rob Pike and Ken Thompson [28]. Go is an open-source and completely free programming language. Go is a compiled, statically typed programming language. The primary use of the Go language is systems programming, but it can also be used to develop high-performance web applications, taking advantage of its high performance and its first-class support for concurrency. The Go syntax is simple to learn, since it contains just 25 keywords. Go was used in this paper because it is a compiled language. Also, there are very few studies about Go in the literature, and this paper will try to contribute to the Go language literature.

The development of Python, a high-level programming language [29], began in 1990 by Guido van Rossum in Amsterdam. It incorporates a dynamic type system with automatic memory management and supports the object-oriented, functional, and procedural programming paradigms. It has a large and comprehensive standard library. There is no need to compile applications developed in Python: the source code of a Python application is processed by the Python interpreter when it starts to run. This is similar to the interpreters used in Perl and PHP. Python applications can run on almost all platforms, including Unix, Linux, Mac and Windows. With Python's simple, indentation-based syntax, it is easier to learn to develop with Python than with most other programming languages. This gives the ability to begin programming sooner, with a lower overhead in learning the syntactic details of the language. Python can be used to develop many different types of applications: web applications, user interface programming, network programming, application and database software, or even system programming. Especially after the 2000s, Python has become more popular, being used for scientific purposes such as machine learning and artificial intelligence. Python was used in this paper because it is interpreted and open-source.
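A minimal, dependency-free sketch of the kind of prototype used in this study can be written with Python's standard library alone: a self-hosted JSON endpoint with an embedded web server, in the "simple and plain" style described in Section 3.2. The `{"message": "hello"}` payload is an illustrative assumption, not the actual response of the paper's prototype:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import urlopen

class RestHandler(BaseHTTPRequestHandler):
    """A single JSON endpoint with no third-party dependencies."""

    def do_GET(self):
        payload = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # avoid per-request console I/O
        pass

# Self-hosted: the web server is embedded in the process itself,
# so no external application server (IIS, Tomcat, ...) is required.
server = ThreadingHTTPServer(("127.0.0.1", 0), RestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

with urlopen("http://127.0.0.1:%d/" % server.server_address[1]) as resp:
    status = resp.status
    body = json.loads(resp.read())
server.shutdown()
```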

Node.js is an open-source, cross-platform JavaScript environment developed by Ryan Dahl in 2009 [30]. Node.js enables software developers to create dynamic web content using JavaScript on the server side. This helps eliminate the difficulty of using different programming languages on the server side and the client side. High-performance web applications and network applications can be developed using Node.js. Node.js also has an extensive standard library to facilitate file operations, network operations (HTTP, DNS, TCP, TLS/SSL and UDP), cryptography, data streams and other basic functions. Node.js is not a programming language itself; it is an environment with a JavaScript interpreter. It was used in this paper because it is desirable to compare the performance of programming languages with that of an alternative environment.

4. EXPERIMENTAL STUDY
The C# (.net core), Java, Go and Python languages and the Node.js environment were used to develop the RestAPI prototypes, and various benchmark tests were run to compare the performance of these applications. Different load and stress tests were applied to all RestAPI applications. The same hardware was used to run all the tests specified in this study: a 2.9 GHz Intel Core i7 processor with 16 GB of 2133 MHz LPDDR3 RAM, running the macOS 10.14.4 Mojave operating system. A PCI-Express-connected SSD disk was used for storage.


First of all, each RestAPI application was tested with three different test sets: two tests using the k6 load testing tool and another using the Locust load testing tool. The k6 load testing tool provides a virtual user model, which can be used to simulate real users by defining the user behavior in the test code. For the first two tests, the user behavior was defined so that each virtual user makes a new web request every 100 milliseconds. During the first test, the number of virtual users was increased from 0 to 1000 over 60 seconds to allow the RestAPI applications to start up properly, and once the number of virtual users reached 1000, the application was run under this load for 60 seconds. The test was then finished by decreasing the number of users from 1000 to 0 in 10 seconds. The tests were run for the 5 different RestAPI applications and the results were recorded. In the second test using the k6 load testing tool, the applications were warmed up until the number of virtual users reached 100, then run with this load of 100 virtual users for 60 seconds while the response times of the applications were measured. This test was run 5 times for each application and the results were recorded. For the third test, the Locust load testing tool was used. Locust also provides a virtual user concept like the k6 tool; using this model, virtual users were implemented so that each user creates new web requests to the RestAPI prototypes during the test period. Tests were run for 60 seconds and the application response performance was recorded. The tests were run 8 times for each RestAPI application: the first test was run with 250 virtual users, and for each subsequent test the number of users was increased by 250, reaching 2000 virtual users in the final test.

Secondly, a stress test was performed on each RestAPI application, using the k6 load testing tool. The purpose of the stress test was to study the behavior of the RestAPI applications when handling high loads of requests, and their recovery capabilities after a high load of traffic. The stress tests were designed to start with a high load of traffic sent to the RestAPI applications; after running with the high load for some time, the load was reduced and then stopped for a defined period of time. After giving the RestAPI applications some time to recover, a high load of traffic was again sent to each application to see whether it could provide similar performance to a fresh process. Stress tests are key to finding potential memory leaks in applications: as the high-load tests are applied a second and third time, applications cannot perform well if there is a memory leak. In the context of this study, however, the implementations are kept simple and no third-party libraries are used, so the stress tests focus mostly on the recovery capabilities.

The stress tests start by increasing the number of virtual users from 0 to 100 in the first 10 seconds. An additional 100 virtual users are added by the end of the 20th second, and the number of virtual users is increased to 400 by the end of the 30th second. Then an additional 200 users are added over the course of the next 25 seconds, with further users added over the following 25 seconds, bringing the total number of users to 750. After reaching this point of high traffic load, the number of virtual users is dropped to 0 to give the applications a timeframe to recover. After sending no traffic to the applications for 20-30 seconds, the same pattern of user increase is applied: a traffic load of 750 users is sent to the RestAPI applications again and their performance metrics are measured.
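The ramp schedule above can be expressed as piecewise-linear interpolation between breakpoints. In the sketch below, the breakpoint list is reconstructed from the description; the size of the final ramp step is inferred from the stated total of 750 virtual users:

```python
def vus_at(t, stages):
    """Linear interpolation of the virtual-user count between breakpoints,
    the usual way a staged load profile is expressed."""
    for (t0, v0), (t1, v1) in zip(stages, stages[1:]):
        if t0 <= t <= t1:
            return round(v0 + (v1 - v0) * (t - t0) / (t1 - t0))
    return 0  # before the test starts or after the load is dropped

# Breakpoints (second, virtual users) of the ramp-up phase described above.
STRESS_STAGES = [(0, 0), (10, 100), (20, 200), (30, 400), (55, 600), (80, 750)]
```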


4.1. Applications
4.1.1. RestAPI Application with C# Programming Language
In this study, the first RestAPI application was implemented using the C# programming language. After implementation, load and stress tests were applied. In Figure 1, the results of the k6 load test can be observed. The number of clients was increased linearly and fixed at 1000 virtual users, and the response time results were recorded. During the test, as the number of users was increased from 0 to 1000, the average response time was observed to be between 0.5 ms and 0.8 ms. When the test was run for 1 minute after the number of virtual users reached 1000, the average response time was observed to be between 0.5 ms and 1 ms. When the 95th-percentile response times are analyzed, which show the performance of the service in more detail than the average response time, the response time stays in the range of 0.75 ms – 2 ms throughout the test, although it increased to 4 ms for an instant. In this context, it was observed that the application can maintain its performance in a sustainable manner.


Figure 1: Response Time (k6 Load Test) for C# RestAPI

Figure 2 shows the response time graph for the second k6 load test. The average response time, median response time, and p80, p95 and p98 percentile response times were recorded, and the test was run 5 times under the same conditions.


Figure 2: Response Time (k6 Load Test – 2) for C# RestAPI
Figure 3 shows the total number of requests that could be handled by the application in the Locust load test. In the test, it was observed that the .net core application was able to increase the number of successful responses until reaching 1250 virtual users; after this point, the number of responses did not increase and even decreased as more virtual users sent requests to the application. It is observed that the application reaches its optimum point on the test hardware with 1250 users, handling a total of 33431 requests.

Figure 3: Number of Requests (Locust Load Test) for C# RestAPI


Figure 4 shows the graph of response times and the total number of requests processed by the C# RestAPI application. In the first phase of the test, the RestAPI was able to respond to the incoming requests in under 18 ms (95th percentile), processing up to 2550 requests/sec. In the second phase of the test, the response time of the application was slightly better at 13 ms (95th percentile), although the total number of requests processed was also below the first phase, at 2450 requests/sec. This test shows that the C# RestAPI was able to successfully recover from a high-load traffic state and resume processing the incoming requests with almost the same level of performance in the second phase of the test, without any side effects from the first phase.


Figure 4: Stress Test Application for C# RestAPI


4.1.2. RestAPI Application with Java Programming Language

In this study, the second RestAPI application was implemented using the Java programming language. After implementation, load and stress tests were applied. In Figure 5, the results of the k6 load test can be observed. The number of clients was increased linearly and fixed at 1000 virtual users, and the response time results were recorded. During the test, as the number of users was increased from 0 to 1000, the average response time was observed to be between 0.35 ms and 0.6 ms. When the test was run for 1 minute after the number of virtual users reached 1000, the average response time was observed to be between 0.45 ms and 0.8 ms. When the 95th-percentile response times are analyzed, which show the performance of the service in more detail than the average response time, the response time stays in the range of 0.75 ms – 1.5 ms throughout the test, although it increased to 2 ms for an instant. In this context, it was observed that the application can maintain its performance in a sustainable manner.

Figure 5: Response Time (k6 Load Test) for Java RestAPI

Figure 6 shows the response time graph for the k6 load test. The average response time, median response time, p80 percentile response time, p95 percentile response time and p98 percentile response time were recorded, and the test was run 5 times under the same conditions.


Figure 6: Response Time (k6 Load Test – 2) for Java RestAPI



Figure 7 shows the total number of requests that the application could handle during the Locust load test. In the test, it was observed that the Java application was able to increase the number of successful responses until reaching 1250 virtual users; after that point, the number of responses did not increase and even decreased as more virtual users sent requests to the application. With the hardware used in the test, the application reached its optimum point at 1250 users, handling a total of 31538 requests.

Figure 7: Number of Requests (Locust Load Test) for Java RestAPI

Figure 8 shows the graph of response times and the total number of requests processed by the Java RestAPI application. In the first phase of the test, the Java RestAPI was able to respond to the incoming requests under 28 ms (95th percentile), processing up to 2500 requests/sec. In the second phase of the test, the response time performance of the application was much better at 4 ms (95th percentile), but the total number of requests processed was slightly below the first test, at 2400 requests/sec. This test shows that the Java RestAPI was able to successfully recover from a high-load traffic state and process even better in the second phase of the test, without any side effects from the first test.


Figure 8: Stress Test Application for Java RestAPI



4.1.3. RestAPI Application with Go Programming Language


In this study, the third RestAPI application was implemented using the Go programming language. After implementation, load and stress tests were applied. In Figure 9, the results of the k6 load test can be observed. The number of clients was increased linearly and then fixed at 1000 units (virtual users), and the response times were recorded. During the ramp-up from 0 to 1000 users, the average response time was observed between 0.3 ms and 0.45 ms. When the test was then run for 1 minute with 1000 virtual users, the average response time was observed between 0.3 ms and 0.7 ms. Looking at the 95th-percentile response times, which reveal the performance of the service in more detail than the average, the response time stayed in the range of 0.5 ms – 1.5 ms throughout the test, although it momentarily rose to 2 ms. In this context, the application was able to maintain its performance in a sustainable manner.
Figure 9: Response Time (k6 Load Test) for Go RestAPI

Figure 10 shows the response time graph for the k6 load test. The average response time, median response time, p80 percentile response time, p95 percentile response time and p98 percentile response time were recorded, and the test was run 5 times under the same conditions.


Figure 10: Response Time (k6 Load Test – 2) for Go RestAPI

Figure 11 shows the total number of requests that the application could handle during the Locust load test. In the test, it was observed that the Go application was able to increase the number of successful responses until reaching 1250 virtual users; after this point, the number of responses did not increase and even decreased as more virtual users sent requests to the application. With the hardware used in the test, the application reached its optimum point at 1250 users, handling a total of 31675 requests.

Figure 11: Number of Requests (Locust Load Test) for Go RestAPI

Figure 12 shows the graph of response time performance and the total number of requests processed by the Go RestAPI application. In the first phase of the test, the Go RestAPI was able to respond to the incoming requests under 14 ms (95th percentile), processing up to 2450 requests/sec. In the second phase of the test, the response time performance of the application was slower than the first test at 17 ms (95th percentile), while the total number of requests processed remained the same as in the first test, around 2450 requests/sec. This test shows that the Go RestAPI was able to successfully recover from a high-load traffic state and process with the same performance in the second phase of the test, without any side effects from the first test.

Figure 12: Stress Test Application for Go RestAPI

4.1.4. RestAPI Application with Python Programming Language


In this study, the fourth RestAPI application was implemented using the Python programming language. After implementation, load and stress tests were applied. In Figure 13, the results of the k6 load test can be observed. The number of clients was increased linearly and then fixed at 1000 units (virtual users), and the response times were recorded. During the ramp-up from 0 to 1000 users, the average response time was observed between 0.5 ms and 8.5 ms. When the test was then run for 1 minute with 1000 virtual users, the average response time was observed between 0.1 ms and 8 ms. Looking at the 95th-percentile response times, which reveal the performance of the service in more detail than the average, the response time fluctuated in the range of 2 ms – 40 ms throughout the test, peaking at 40 ms. In this context, the application could not maintain its performance in a sustainable manner; both the response times and the number of requests answered fluctuated significantly during the test. Accordingly, it can be said that the application developed with Python performed worse than those developed with the other languages.
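The source of the Python RestAPI used in the tests is not reproduced in the paper. As a rough illustration of what a "simplest form possible" endpoint looks like, a plain-stdlib sketch could be the following (the handler name, port and payload are illustrative; the study's actual implementation may use a framework):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import json

class HelloHandler(BaseHTTPRequestHandler):
    """Minimal RestAPI endpoint: returns a fixed JSON body, no extra work."""

    def do_GET(self):
        body = json.dumps({"message": "hello"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Silence per-request logging so it does not distort timing measurements.
        pass

def serve(port=8080):
    # Blocks forever; run in a separate process when driving it with a load tool.
    HTTPServer(("127.0.0.1", port), HelloHandler).serve_forever()
```

Returning a precomputed response with no database or third-party dependency keeps the measurement focused on the language runtime itself, which mirrors the design choice made for all five applications in the study.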


Figure 13: Response Time (k6 Load Test) for Python RestAPI

Figure 14 shows the response time graph for the k6 load test. The average response time, median response time, p80 percentile response time, p95 percentile response time and p98 percentile response time were recorded, and the test was run 5 times under the same conditions.
Figure 14: Response Time (k6 Load Test – 2) for Python RestAPI

Figure 15 shows the total number of requests that the application could respond to during the Locust load test. In the test, it was observed that the Python application increased the number of successful responses until reaching 1000 virtual users; after this point, the number of responses did not increase and even decreased as more virtual users sent requests to the application. With the hardware used in the test, the application reached its optimum point at 1000 users, handling a total of 26264 requests.

Figure 15: Number of Requests (Locust Load Test) for Python RestAPI


Figure 16 shows the graph of response time performance and the total number of requests processed by the Python RestAPI application. Unlike the other stress tests in this study, up to 600 virtual users were created during this stress test. In the first phase of the test, the Python RestAPI was able to respond to the incoming requests under 600 ms on average, processing up to 1100 requests/sec. In the second phase of the test, the response time performance of the application was worse than the first test at 800 ms on average, while achieving a higher number of processed requests per second, around 1250 requests/sec. The Python RestAPI did not run in a stable manner throughout the test, but from a recovery-capability perspective, it was observed that the Python RestAPI was able to recover successfully from a high-load traffic state and process with similar performance in the second phase of the test, without any side effects from the first test.


Figure 16: Stress Test Application for Python RestAPI



4.1.5. RestAPI Application with Node.js Environment


In this study, the fifth RestAPI application was implemented using the Node.js environment. After implementation, load and stress tests were applied. In Figure 17, the results of the k6 load test can be observed. The number of clients was increased linearly and then fixed at 1000 units (virtual users), and the response times were recorded. During the ramp-up from 0 to 1000 users, the average response time was observed between 0.5 ms and 1.8 ms. When the test was then run for 1 minute with 1000 virtual users, the average response time was observed between 1 ms and 2 ms. Looking at the 95th-percentile response times, which reveal the performance of the service in more detail than the average, the response time stayed in the range of 2 ms – 8 ms throughout the test, although it momentarily rose to 30 ms. In this context, the application was able to maintain its performance in a sustainable manner.

Figure 17: Response Time (k6 Load Test) for Node.js RestAPI

Figure 18 shows the response time graph for the k6 load test. The average response time, median response time, p80 percentile response time, p95 percentile response time and p98 percentile response time were recorded, and the test was run 5 times under the same conditions.


Figure 18: Response Time (k6 Load Test – 2) for Node.js RestAPI
Figure 19 shows the total number of requests that the application could handle during the Locust load test. In the test, it was observed that the Node.js application was able to increase the number of successful responses until reaching 1250 virtual users; after this point, the number of responses did not increase and even decreased as more virtual users sent requests to the application. With the hardware used in the test, the application reached its optimum point at 1250 users, handling a total of 30665 requests.

Figure 19: Number of Requests (Locust Load Test) for Node.js RestAPI

Figure 20 shows the graph of response time performance and the total number of requests processed by the Node.js RestAPI application. In the first phase of the test, the Node.js RestAPI was able to respond to the incoming requests under 40 ms on average and under 200 ms at the 95th percentile, processing up to 2700 requests/sec. In the second phase of the test, the response time performance of the application was slightly better than the first test at 30 ms on average and under 128 ms (95th percentile), yet with a lower number of processed requests per second, around 2550 requests/sec. This test shows that the Node.js RestAPI was able to successfully recover from a high-load traffic state and process with the same performance in the second phase of the test, without any side effects from the first test.



Figure 20: Stress Test Application for Node.js RestAPI


4.2. Performance Comparisons
In Figure 21, the number of requests that can be handled per second, per virtual-user increment of 250, can be examined for all programming languages used in this study.

Figure 21: Maximum Number of Requests (Locust Load Test)

The common pattern observed across all of the platforms is that, as the number of virtual users increases, the number of responses to requests per second increases up to a certain point and then remains at that level; if the number of virtual users is increased even further, the total number of responses does not increase and even falls in some cases. The point where the number of responses stops increasing indicates the optimum load that the API application can handle on the hardware it is running on. During the tests, different metrics were observed for the different platforms in the study.
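The saturation behaviour described above can be reproduced with a very small closed-loop load generator. The sketch below is stdlib-only and illustrative, not the actual Locust or k6 tooling used in the study: each "virtual user" sends requests back-to-back for a fixed duration, and throughput is the total request count divided by the duration.

```python
import threading
import time
import urllib.request

def load_test(url, virtual_users, duration_s):
    """Closed-loop load generator: each virtual user loops, sending one
    request after another until the deadline. Returns the total number of
    completed requests and the response-time samples (in seconds)."""
    counts = [0] * virtual_users   # per-user counters, no lock needed
    samples = []
    lock = threading.Lock()
    deadline = time.monotonic() + duration_s

    def user(i):
        while time.monotonic() < deadline:
            t0 = time.monotonic()
            with urllib.request.urlopen(url) as resp:
                resp.read()
            elapsed = time.monotonic() - t0
            counts[i] += 1
            with lock:
                samples.append(elapsed)

    threads = [threading.Thread(target=user, args=(i,)) for i in range(virtual_users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(counts), samples
```

Sweeping `virtual_users` upward and plotting `total / duration_s` reveals the plateau (the optimum point) that the figures in this section illustrate.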

As can be seen in Figure 21, the Python platform was able to respond to 407 requests per second in the load test with 1000 users, and although the number of requesting users increased, the number of responses per second did not increase and remained the same on average. From this point of view, Python showed the worst performance in handling multiple user requests, since the other languages were able to handle more requests per second, up to 1250 users. As the number of virtual users exceeded 1250, the applications reached their optimum limits and were not able to handle more requests even as the number of virtual users increased. In the tests carried out with 1500, 1750 and 2000 users, the number of responses per second was not higher than in the tests performed with 1250 users. As a result, in this test where the number of users was increased, Python showed the worst performance by responding to 407 requests per second, while the other languages performed similarly at around 550 requests per second.
In Figure 22, the total number of requests recorded during the one-minute load tests can be examined, per virtual-user increment of 250, for the applications designed with the different programming languages.

Figure 22: Total Number of Requests (Locust Load Test)

In this test, the results were parallel to those obtained in the previous test. In the tests conducted with different numbers of virtual users for one minute, the total number of requests handled increased up to a certain number of users depending on the platform; after a certain point, the applications reached their limit and remained stable. Similar to the previous test, the worst performance belongs to the Python RestAPI application, which was able to handle a total of 28844 requests in the test with 1000 users. The best performance was achieved by the .NET Core application, responding to 33431 requests generated by 1250 virtual users. However, the applications developed with Node.js, Java and Go performed very close to these values.

In Figure 23, the average response times can be examined, per virtual-user increment of 250, for the applications designed with the different programming languages and environments.
Figure 23: Average Response Time (Locust Load Test)

Looking at the average response times with 250 users, all platforms ran with very high performance, handling requests under 5 ms. Then, starting with the test with 750 users, the Python RestAPI application began to fall behind the other languages. During the load tests with 750 virtual users and more, the Python RestAPI application was 50% to 400% slower than the other platforms in terms of average response time. Apart from this, while the other languages performed with very close average response times, the .NET Core platform provided the best average response time performance in the study. The average response time metric is widely used to monitor the performance of web systems. However, when measuring web application performance, percentile response times such as p75, p80, p95 and p99 should be examined alongside the average to better understand how the applications behave, because these values expose the outliers that the average overlooks. In the following tests, the performances of the applications are therefore compared using percentile values.
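For reference, percentile response times of the kind reported here (p80, p95, p99) can be computed from raw samples with the nearest-rank method; load tools may use slightly different interpolation rules, so this is an illustrative sketch with hypothetical sample data:

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample such that at least
    p% of all samples are less than or equal to it (p in 0-100)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical response times in milliseconds, with a heavy tail.
times_ms = [1, 2, 2, 3, 3, 3, 4, 5, 40, 200]
average = sum(times_ms) / len(times_ms)  # 26.3 ms, inflated by two outliers
median = percentile(times_ms, 50)        # 3 ms
p95 = percentile(times_ms, 95)           # 200 ms, exposes the tail
```

The gap between the 26.3 ms average and the 3 ms median in this toy data is exactly the distortion the paragraph above warns about when relying on averages alone.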

In Figure 24, the p80 percentile response times can be examined, per virtual-user increment of 250, for the applications designed with the different programming languages and environments.

Figure 24: Response Time – p80 (Locust Load Test)

In Figure 25, the p95 percentile response times can be examined, per virtual-user increment of 250, for the applications designed with the different programming languages and environments.

Figure 25: Response Time – p95 (Locust Load Test)


In Figure 26, the p99 percentile response times can be examined, per virtual-user increment of 250, for the applications designed with the different programming languages and environments.


Figure 26: Response Time – p99 (Locust Load Test)

5. CONCLUSIONS

In this study, prototype RestAPI applications were developed using C#, Java, Go, Python and Node.js, and the performances of the applications under different loads were compared. In order to make an appropriate comparison, the applications were designed in the simplest form possible, returning simple responses without performing any additional task or requiring any third-party library or component. The developed applications were tested with two different load test applications. As a result of the first load test set, carried out with the k6 application, it was observed that the RestAPI application developed with Go was able to handle the requests in the most stable way, while the applications developed with the C# and Java languages performed well, with performance very close to Go. While the Node.js RestAPI also handled requests in a stable manner, its average response times were slightly slower than the other applications. The RestAPI developed with Python had the slowest response times and handled the least number of requests compared with the other applications in the study. It was also observed that this application handled the requests in an unstable manner; the average response time fluctuated considerably during the test.

In the second test with k6, it was observed that the best average response time was obtained from the application developed with Go. The applications developed with the C# and Java languages displayed the second-best performance and were very close to each other. The Node.js application had the third-best average response time, while the Python application was observed to have the worst average response time compared with the other applications.

In the third set of tests, conducted with Locust, it was observed that all RestAPI applications performed well during the tests with up to 750 virtual users; both the average response times and the total number of requests handled were very close to each other. It was observed that the application developed with Python started to degrade in performance with more than 750 virtual users, both in average response time and in the total number of handled requests. The applications developed with C#, Java, Go and Node.js gave their optimum performance at 1250 virtual users.

The applications developed with Java and C# were found to perform close to each other in all tests and to work stably under both low and high load. While the applications developed with Node.js and Python showed lower performance under load, they performed with higher results under low load. When the results are examined, it can be seen that the outputs are quite consistent with the design principles of the languages. Python and Node.js are not compiled languages; they fall into the interpreted-language category. This can be considered one of the important reasons for the lower performance of these applications under load. The C# and Java languages are compiled into intermediate languages and translated into machine language with just-in-time compilation methods. In this context, they are expected to perform better, especially compared to Python. Applications developed with Go are converted directly into binary machine code by the Go compiler. This makes it possible for applications developed with Go to deliver high performance even under high traffic loads, which was also observed in the tests.

In conclusion, in this study, RestAPI applications developed with different programming languages and environments were compared with each other in isolation from other factors. Factors that can also affect performance, such as application architecture, application deployment strategies and web application servers, exist as well. In future studies, it is planned to examine the effects of these factors on the performance of the applications by adding them to the applications one by one.

REFERENCES
[1] P. Höfner, F. Lautenbacher, Algebraic structure of web services, Electronic Notes in Theoretical Computer Science 200 (2008) 171-187.

[2] M. Fahat, N. Moalla, Y. Ourzout, Dynamic execution of a business process via web service selection and orchestration, Procedia Computer Science 51 (2015) 1655-1664.

[3] A. De Renzis, M. Garriga, A. Flores, A. Cechich, Case-based reasoning for web service discovery and selection, Electronic Notes in Theoretical Computer Science 321 (2016) 89-112.

[4] T. Mikkonen, A. Salminen, Implementing mobile mashware architecture: downloadable components as on-demand services, Procedia Computer Science 10 (2012) 553-560.

[5] S. Browne, J. Dongarra, N. Garner, G. Ho, P. Mucci, A Portable Programming Interface for Performance Evaluation on Modern Processors, The International Journal of High Performance Computing Applications, 14 (2000) 189-204.


[6] M. Polychronakis, E.P. Markatos, K.G. Anagnostakis, Oslebo, Design of an application programming interface for IP network monitoring, IEEE/IFIP Network Operations and Management Symposium, 2004.

[7] S. Van der Linden, A. Rabe, M. Held, B. Jakimow, P. J. Leitão, A. Okujeni, M. Schwieder, S. Suess, P. Hostert, The EnMAP-Box—A Toolbox and Application Programming Interface for EnMAP Data Processing, Remote Sensing, 7 (2015) 11249-11266.

[8] P. Zambelli, S. Gebbert, M. Ciolli, Pygrass: An Object Oriented Python Application Programming Interface (API) for Geographic Resources Analysis Support System (GRASS) Geographic Information System (GIS), ISPRS International Journal of Geo-Information, 2 (2013) 201-219.

[9] J. F. Bowen, A. R. Mathkar, R. Mathur, S. M. A. Syed, T. W. Weimer, J. E. Bennett, C. W. Braganza, T. Dwivedi, Patent No. US7401338B1, 2002.

[10] P. J. Hartmaier, W. E. Gossman, Patent No. US5978672A, 1996.

[11] A. A. Desai, M. W. Fussell, A. E. Kimball, M. L. Brundage, S. Dubinets, T. F. Pfleiger, Patent No. US7383255B2, 2003.

[12] A. Bloesch, R. Rajagopal, Patent No. US7293254B2, 2003.

[13] C. G. Eisler, G. E. Engstrom, Patent No. US6128713A, 1997.

[14] D. J. Strom, O. Zeliger, Patent No. US7010796B1, 2001.

[15] C. S. Evans, M. J. Andrews, O. K. Sharma, J. E. Veres, J. M. Thornton, Patent No. US7116310B1, 1999.

[16] M. A. Boillot, Patent No. US8312479B2, 2006.


[17] N. Cohen, "Performance Testing vs. Load Testing vs. Stress Testing", https://www.blazemeter.com/blog/performance-testing-vs-load-testing-vs-stress-testing, Retrieved April 2019.

[18] A. Iyengar, E. MacNair, T. Nguyen, An analysis of Web server performance, IEEE Global Telecommunications Conference (GLOBECOM), 1997.

[19] M. Volodarsky, O. Londer, B. Hill, B. Cheah, S. Schofield, C. A. Mares, K. Meyer, Internet Information Service Resource Kit, Microsoft, 2008.

[20] J. Brittain, I. F. Darwin, Tomcat: The Definitive Guide, O'Reilly, 2007.

[21] A. Goncalves, Beginning Java EE 6 Platform with GlassFish 3: From Novice to Professional, Apress, 2009.

[22] M. Girdley, S. L. Emerson, R. Woollen, J2EE Applications and BEA WebLogic Servers, Prentice Hall, 2001.

[23] A. Hejlsberg, S. Wiltamuth, P. Golde, The C# Programming Language, Addison-Wesley Professional, 2006.

[24] M. Wenzel, "Common Language Runtime (CLR) overview", https://docs.microsoft.com/en-us/dotnet/standard/clr, Retrieved March 2019.

[25] Anonymous, "History of Java", https://www.javatpoint.com/history-of-java, Retrieved March 2019.

[26] Anonymous, "Feature of Java", https://www.javatpoint.com/internal-details-of-jvm, Retrieved March 2019.


[27] Anonymous, "JVM (Java Virtual Machine) Architecture", https://www.javatpoint.com/internal-details-of-jvm, Retrieved March 2019.

[28] A. A. A. Donovan, B. W. Kernighan, The Go Programming Language, Addison-Wesley Professional, 2016.

[29] D. Karssenberg, K. D. Jong, J. Van Der Kwast, Modelling landscape dynamics with Python, International Journal of Geographical Information Science, 21 (2007) 483–495.

[30] S. Pasquali, K. Faaborg, Mastering Node.js, Packt, 2017.
