
Jokull Journal, Vol. 64, No. 6; Jun 2014

CaLLF: A load balancing framework for an SMS-based service invocation environment
Mandla T. Nene, Edgar Jembere, Matthew O. Adigun, and John B. Oladosu*

Centre for mobile e-Services, Department of Computer Science, University of Zululand,


Private Bag X1001, KwaDlangezwa 3886, South Africa

email: {mandlatnene, ejembere, profmatthewo, johnoladosu}@gmail.com

* Corresponding Author (johnoladosu@gmail.com)

ABSTRACT
The intense increase in the usage of short message service (SMS) technology in recent years
has led service providers to seek solutions that enable users of mobile devices to access
services through SMS. This has resulted in the proposal of solutions for SMS-based service
invocation in service-oriented environments. However, the dynamic nature of service-oriented
environments, coupled with sudden load peaks generated by service requests, poses performance
challenges to the infrastructures that support SMS-based service invocation. To address this
problem, we propose the Client-aware Least Loaded Framework (CaLLF), a client-aware load
balancing approach for SMS-based invocation of services. The proposed CaLLF builds upon
existing load balancing techniques to handle unexpected traffic and on request prioritization
techniques to handle service client requests according to their class of importance. In
addition, we observed that service-oriented environments deliver services to different
service clients (either premium or regular clients) that require different service qualities,
which enforces service guarantees; CaLLF is designed to meet this requirement. The evaluation
of CaLLF for scalability showed that it can cope with increasing traffic while providing
better performance than the Round Robin (RR) scheme, although with a trade-off of higher
computational power: CaLLF achieved better results at the cost of computational resources.
We also evaluated CaLLF for utility and observed that it provides better utility for premium
clients than a non-client-aware Least Loaded (LL) load balancing approach, but this was
achieved at the expense of regular clients' utility.

Keywords— Load balancing; request prioritization; service invocation; service oriented computing; short message service.

1. Introduction
Short message service (SMS) has been established as a de facto standard for sending
and receiving text messages on mobile phones (Grajales III et al., 2014, Jones and
Graham, 2013, Jun, 2013, Sherif and Seo, 2013, Weintraub et al., 2013). Its popularity
has attracted research interest, and service providers use it as an alternative technology
for rendering services, allowing a service consumer (mobile user) to access services from
a service provider by requesting and retrieving content (Lin et al., 2010, Risi et al.,
2013, Risi and Teófilo, 2009, Saxena and Chaudhari, 2013, Teófilo et al., 2013, Zerfos
et al., 2006). This is known as SMS-based service invocation in service-oriented
computing (SOC) environments. The service-oriented environment makes SMS-based service
invocation feasible by enabling mobile users to access the wealth of applications that
were hitherto only accessible through personal computers. However, SOC environments are
by nature dynamic and composed of autonomous entities, which makes them unpredictable and
difficult to manage. In such environments, it is hard to predict how the entities involved
will behave at a particular time (Andrikopoulos et al., 2012, Channabasavaiah et al., 2003,
Elfatatry and Layzell, 2004, Erol et al., 2014, Papazoglou, 2008, Sprott and Wilkes, 2004).
One of the entities in such environments is the service consumer, which is known to behave
unpredictably. This implies that service providers may receive very few requests for a
given service at one time and subsequently receive a heavy volume of requests. This
overloads the service provider's infrastructure, which in turn poses performance challenges.
Load balancing is one of the techniques reported in the literature for addressing the
challenge of infrastructure overloading in a dynamic environment such as SMS-based service
invocation (Bourke, 2001, Cardellini et al., 1999a, Kopparapu, 2002, Rao et al., 2003,
Song et al., 2014). Load balancing is defined as a process that evenly distributes traffic
amongst computers, thus resolving overloading or congestion "so that no single" computer
"is overwhelmed" (Bourke, 2001). Load balancing offers different techniques/schemes, which
are traditionally implemented in a mechanism called a load balancer that acts as the front
end of the service provider's servers, as illustrated in Figure 1 (a). The load balancer is
responsible for load (request) scheduling. The load balancer in this work is the SMS broker,
which acts as the front end for the service provider's servers that provide the content and
services, as shown in the architecture by Brown et al. (2007). It translates SMS requests
sent by service consumers into HTTP requests and directs the requests to a service provider
server using some load balancing technique, as shown in Figure 1 (b).
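
To make the broker's role concrete, the following minimal sketch (our illustration, not the authors' implementation; the class names, the SMS text convention "<service> <parameters>" and the URL layout are assumptions) shows a front-end component that translates an incoming SMS into an HTTP request and forwards it to a backend chosen by a pluggable load balancing scheme:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

/** Minimal sketch of an SMS broker acting as a load-balancing front end. */
public class SmsBroker {

    /** Strategy interface: any load balancing scheme picks a backend server. */
    interface LoadBalancingScheme {
        URI pickServer(List<URI> servers);
    }

    private final List<URI> providerServers;
    private final LoadBalancingScheme scheme;
    private final HttpClient http = HttpClient.newHttpClient();

    SmsBroker(List<URI> providerServers, LoadBalancingScheme scheme) {
        this.providerServers = providerServers;
        this.scheme = scheme;
    }

    /** Translate an incoming SMS (sender + text body) into an HTTP request
     *  and forward it to the server selected by the load balancing scheme. */
    String handleSms(String msisdn, String smsText) throws Exception {
        // Assumed SMS convention: "<serviceName> <parameters...>", e.g. "stockquote IBM"
        String[] parts = smsText.trim().split("\\s+", 2);
        String service = parts[0];
        String params = parts.length > 1 ? parts[1] : "";

        URI server = scheme.pickServer(providerServers);
        URI target = URI.create(server + "/services/" + service
                + "?sender=" + msisdn + "&q=" + java.net.URLEncoder.encode(params, "UTF-8"));

        HttpRequest request = HttpRequest.newBuilder(target).GET().build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();   // would be converted back into an SMS reply
    }
}
```

The `LoadBalancingScheme` interface is the extension point: the schemes discussed in the rest of the paper (round robin, weighted round robin, least loaded) could be plugged in without changing the broker.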
A queuing approach that implements a store-and-forward mechanism has been
proposed (Ramana and Ghatage, 2005). The forwarding mechanism is a load distributor
that implements simple load balancing schemes such as the round robin (RR) scheme
(Hahne, 1991, Hahne and Gallager, 1986). RR distributes load across the service
provider's servers in turn, as sketched below. We observed that using RR, which does not
consider any system state information, may lead to poor load distribution decisions and
thus affects system performance. Because of this flaw, RR is not an appropriate load
balancing technique for a dynamic environment such as SMS-based service invocation.
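
For illustration, a plain round-robin selector, reusing the hypothetical `LoadBalancingScheme` interface from the broker sketch above, could look as follows; note that it consults no system-state information, which is exactly the weakness observed:

```java
import java.net.URI;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

/** Round Robin sketch: cycles through the servers, ignoring all system state. */
public class RoundRobinScheme implements SmsBroker.LoadBalancingScheme {
    private final AtomicInteger next = new AtomicInteger(0);

    @Override
    public URI pickServer(List<URI> servers) {
        // Each call picks the next server in turn, regardless of its current load.
        int index = Math.floorMod(next.getAndIncrement(), servers.size());
        return servers.get(index);
    }
}
```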
To make the load balancing aware of the importance of each client's request, we
implemented request prioritization techniques. This is because a service-oriented
environment deals with service clients whose needs may differ in the required level of
service quality. In this paper, we focus on the design and evaluation of the proposed
Client-aware Least Loaded Framework (CaLLF), which was evaluated in an SMS-based service
invocation environment.


(a) Conventional load balancing architecture (Brown, Shipman, 2007)

(b) Adopted approach to SMS-based service invocation load balancing architecture (Brown, Shipman, 2007)

Figure 1: Load balancing architecture

The rest of the paper is organized as follows: Section 2 presents a short review of
related work. Section 3 describes the research methodology: Section 3.1 discusses the
design criteria of CaLLF that helped us achieve our goal, and Section 3.2 presents CaLLF
and its components. Section 4 discusses the performance evaluation of CaLLF. We conclude
the paper with a discussion of the implications of our findings and our future work in
Section 5.

2. Related work
A lot of work related to load balancing has been done in distributed environments
(Brown, Shipman, 2007, Lin, Silva, 2010, Risi, da S, 2013, Risi and Teófilo, 2009,
Saxena and Chaudhari, 2013, Teófilo, Cavalcanti, 2013). The major goal is to improve the
performance of such systems. To achieve this goal, various load balancing schemes have
been introduced at different levels. Currently, SMS-based service invocation implements a
queuing mechanism that distributes load using a simple load balancing scheme such as the
round robin scheme proposed by Ramana and Ghatage (2005). The RR scheme assigns request
load on a rotational basis between servers. However, the RR scheme makes assumptions when
distributing load: it effectively guesses, because it has little or no information about
the system status. The other flaw is that the RR scheme can only serve homogeneous
environments, while most distributed systems today are heterogeneous. This led to the
introduction of the weighted round-robin algorithm (Sang and Yang, 2003, Sonntag and
Reinig, 2008) as an advanced version of round robin that eliminates these deficiencies.
In weighted round robin, one can assign a weight to each server in the group so that, if
one server is capable of handling twice as much load as another, the more powerful server
gets a weight of two (Chhabra and Singh, 2006, Shirazi et al., 1995, Wenzheng and
Hongyan, 2010). The weighted round-robin algorithm takes the capacity of the servers in
the group into account, which means that it works well for heterogeneous servers.
However, it does not consider advanced load balancing requirements such as the processing
time of each individual request. It assumes that the processing times of all requests are
the same, which is particularly untrue in environments with heterogeneous requests. A
weighted round-robin selector is sketched below.
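
The following weighted variant of the earlier round-robin sketch is our own illustration under the same assumed interface, with static integer weights standing in for server capacity:

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

/** Weighted Round Robin sketch: a server with weight 2 receives twice as many
 *  requests as a server with weight 1. Weights are static capacity estimates. */
public class WeightedRoundRobinScheme implements SmsBroker.LoadBalancingScheme {
    private final List<URI> expanded = new ArrayList<>();
    private int next = 0;

    WeightedRoundRobinScheme(List<URI> servers, List<Integer> weights) {
        // Expand each server into the rotation "weight" times.
        for (int i = 0; i < servers.size(); i++) {
            for (int w = 0; w < weights.get(i); w++) {
                expanded.add(servers.get(i));
            }
        }
    }

    @Override
    public synchronized URI pickServer(List<URI> ignored) {
        URI server = expanded.get(next);
        next = (next + 1) % expanded.size();
        return server;
    }
}
```

A server given weight 2 appears twice in the rotation and therefore receives roughly twice the requests, but every request is still treated as equally expensive, which is the limitation noted above.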
Due to advances in distributed systems in recent years, which have demanded better
solutions, optimization-based solutions such as Simulated Annealing (SA) load balancing
have emerged (Boone et al., 2010, Chhabra and Singh, 2006, Christodoulopoulos et al.,
2007). The goal of the SA algorithm is to find an acceptably good solution in a fixed
amount of time, rather than the best possible solution, to deal with the situation faced
at a particular time. However, optimization algorithms use heuristic approaches and
therefore require more computational power, which renders them unsuitable for distributed
systems such as SMS-based service invocation that usually belong to a single service
provider with limited resources. Besides optimization solutions, machine learning (ML)
approaches have also been used for load balancing (Revar et al., 2010). ML, a branch of
artificial intelligence, is a scientific discipline concerned with the design and
development of algorithms that allow computers to evolve behaviours based on empirical
data. Various types of ML algorithms have been used for load balancing, including Genetic
Algorithms (GA) (Babu and Amudha, 2014, Fco, 2007, Gondhi and Pant, 2009, Nikravan and
Kashani, 2007, Petrovic and Fayad, 2005, Piroozfard et al., 2014), Bayesian networks
(Guo, 2003, Romtveit, 2010) and Reinforcement Learning (Jędrzejowicz and Ratajczak-Ropel,
2014, Parent et al., 2002, Wang et al., 2009). Most load balancing research that
implements such algorithms focuses on static scenarios, collecting empirical data
beforehand because of the high computing time involved. These algorithms usually obtain
an optimal load distribution solution for a particular situation, but intensive
computation is usually involved.
A successful and accurate load balancing scheme requires some sensitivity to the
server load in order to adapt the load balancing weights to the current load. This can be
done by monitoring server behaviour using schemes such as load sharing and round-trip.
The load sharing scheme (Al-Raqabani et al., 2005, Cheung and Jacobsen, 2006, Cortes
et al., 1999) balances the load between servers by checking which ones are heavily or
lightly loaded using monitored load metrics. Requests are moved from heavily loaded
servers to lightly loaded servers (offloading). This is a well-known approach, but it
incurs some delay when migrating load. The round-trip scheme (Cardellini et al., 2002,
Cardellini et al., 1999b, Yan et al., 2008) assigns requests to the server that is
currently responding fastest, based on monitored server response times. However, this
scheme only provides best-effort service. The round-trip scheme is derived from the least
loaded approach, which distributes requests to a group of servers based on which server
currently has the lowest load index (Arapé et al., 2003, Balasubramanian et al., 2004,
Cardellini, Casalicchio, 2002, Jongtaveesataporn and Takada, 2010). The least loaded
scheme achieves this by monitoring some threshold value or load index metric. It has
shown its robustness and flexibility by being applicable in environments such as load
balancing for network routing (Karasan and Ayanoglu, 1998, Shen et al., 2000). Moreover,
this scheme can be combined with other solutions to achieve a good load balancing
solution (Fco, 2007, Othman et al., 2003). However, the least loaded scheme can incur
delay depending on the monitoring approach chosen to gather current system information.
In a service-oriented environment, service guarantees are enforced because service
providers deal with different types of clients that need different service times, such as
premium and regular clients. Requests from premium clients are given priority over those
from regular clients. In this case, load balancing approaches are made aware of the
different classes of client requests when balancing load. There are different request
prioritization techniques that can be employed with load balancing approaches.
Priority-based load balancing (Goyal et al., 2011, Grosu et al., 2002) consists of a
mechanism that tags premium requests as high priority and regular requests as low
priority. Requests from regular clients are only handled when there are no premium
requests queued or when a certain time has elapsed; however, this approach can lead to
starvation of regular requests (a sketch of such a mechanism is given below). Another
approach to load balancing is the admission control approach (Bonald and Roberts, 2001,
Cherkasova and Phaal, 2002, Muppala and Zhou, 2011). The assumption is that premium and
regular clients' requests carry a unique identifier. The load balancer responsible for
load distribution uses this identifier to selectively drop some requests from regular
clients, reducing the server load in order to guarantee the service level agreements
(SLAs) of premium clients. However, this approach is susceptible to sudden load changes
and is therefore unsuitable for dynamic environments.
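
The priority-tagging mechanism described above can be sketched as two queues with a simple time-lapse rule that limits starvation of regular requests; the queue structure and the 5-second aging threshold are assumptions made for illustration, not values from the cited works:

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Sketch of priority-based load balancing with a simple anti-starvation rule:
 *  premium requests are served first, but a regular request is released once
 *  it has waited longer than MAX_WAIT_MS even if premium requests are queued. */
public class PriorityDispatcher {
    static final long MAX_WAIT_MS = 5_000;          // assumed aging threshold

    record Request(String clientId, boolean premium, long enqueuedAtMs) {}

    private final Deque<Request> premiumQueue = new ArrayDeque<>();
    private final Deque<Request> regularQueue = new ArrayDeque<>();

    public synchronized void submit(Request r) {
        (r.premium() ? premiumQueue : regularQueue).addLast(r);
    }

    /** Pick the next request to dispatch, or null if both queues are empty. */
    public synchronized Request next(long nowMs) {
        Request oldestRegular = regularQueue.peekFirst();
        boolean regularStarving = oldestRegular != null
                && nowMs - oldestRegular.enqueuedAtMs() > MAX_WAIT_MS;
        if (regularStarving || premiumQueue.isEmpty()) {
            return regularQueue.pollFirst();        // may be null if also empty
        }
        return premiumQueue.pollFirst();
    }
}
```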
From the foregoing, it is clear that a distributed computing environment such as SMS-based
service invocation must support dynamic and adaptive load balancing in order to handle
dynamic request loads. Moreover, the load balancing approach used in such environments
should be combined with prioritization so that client requests are served according to
their class of importance. Therefore, in this work we propose CaLLF to achieve this
objective.

3. Research Methodology
In this section, we present the research methodology. We first present the theoretical
framework that guided the design of CaLLF; thereafter, the design methodology of CaLLF is
discussed.
3.1 Theoretical Framework of CaLLF
1) CaLLF Design Criteria
Literature suggests that load balancing techniques can be used to alleviate the
performance challenges faced by infrastructure in a distributed environment such as
SMS-based service invocation. Moreover, prioritization techniques can be used to
differentiate client requests that need different service qualities. The CaLLF model is
specifically designed to cope with sudden load peaks in a scalable manner while providing
adequate performance for client requests. From our review of the literature, we identified
the design criteria to take into consideration in designing CaLLF for an SMS-based service
invocation environment. These design criteria include:

i) Adaptive load handling scheme: CaLLF is designed to function in service-oriented
environments, which are dynamic. This implies that such environments can manifest
unpredictable behaviour. During SMS invocation of services in a service-oriented
environment, abnormal behaviour from service clients can lead to unexpected traffic load
on the service provider's servers. To address this issue, the environment needs an
adaptive load handling scheme that adapts to changes in the environment and makes
decisions based on the current status of the system being monitored. This load handling
scheme should be able to make load balancing decisions based on the particular situation
occurring in the system, as reflected in the monitored system status information.

ii) Provision of load monitoring assistance: The adaptive load handling scheme must be
complemented, in terms of making fair load distribution decisions, by monitoring the
system's current state and reporting the load situation based on suitable system load
metrics (Grosu, Chronopoulos, 2002). This load monitoring mechanism should supply
information about the system status before the adaptive load handling scheme makes a
load balancing decision.

iii) Prioritization-based client classification: The enforcement of service guarantees is
mandatory in a service-oriented environment in order to provide services to the various
categories of clients. This calls for a mechanism that is responsible for differentiating
client requests according to their category. These clients are either premium clients,
which are served with guarantees, or regular clients, which are served on a best-effort
basis. Against this background, it is imperative to classify client requests so that the
implemented load balancing approach is aware of clients' importance, for instance that
premium requests require higher service quality than regular requests. This is achieved
by a prioritization mechanism that gives preference to premium clients.

2) Adaptive Load handling mechanism


Based on the need for an adaptive load handling scheme mentioned in 1 (i) above, the
least loaded scheme was chosen (Balasubramanian, Schmidt, 2004, Cardellini, Casalicchio,
2002, Jongtaveesataporn and Takada, 2010). This scheme was chosen because of its dynamic
and adaptive performance under stress load (Balasubramanian, Schmidt, 2004). Moreover,
Jongtaveesataporn and Takada (2010) showed that an adaptive load handling scheme
outperforms the Minimum, Threshold, Random and Round Robin approaches in service-oriented
environments, where an enterprise service bus was tested for the best routing scheme. The
least loaded scheme can be adapted to suit any load balancing environment owing to its
flexibility (Karasan and Ayanoglu, 1998, Shen, Bose, 2000). The least loaded scheme has a
request transfer policy that determines whether a particular service provider's server is
suitable for receiving requests. This transfer policy allows a server or service endpoint
to continue receiving requests if the server currently carries less load than the other
servers, based on a monitored load index metric. The load index metric is used to
identify which server is the most appropriate to receive requests at a given point in
time. For that to happen, the transfer policy is supported by a load information policy,
which disseminates information about the status of each service provider's server. The
load information policy is a load monitoring mechanism that supports the least loaded
scheme in making appropriate load distribution decisions; a minimal least loaded selector
is sketched below. Sub-section 3 below discusses how the load monitoring mechanism works
to support the adaptive load handling mechanism.
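
A minimal sketch of the least loaded transfer policy, assuming the load information policy is available as a function that returns the current load index of a server, and reusing the hypothetical `LoadBalancingScheme` interface from the broker sketch in Section 1 (the names are ours):

```java
import java.net.URI;
import java.util.Comparator;
import java.util.List;
import java.util.function.ToDoubleFunction;

/** Least Loaded sketch: route each request to the server whose monitored
 *  load index (e.g. CPU utilization in [0,1]) is currently the lowest. */
public class LeastLoadedScheme implements SmsBroker.LoadBalancingScheme {
    private final ToDoubleFunction<URI> loadIndexOf;   // supplied by a load monitor

    LeastLoadedScheme(ToDoubleFunction<URI> loadIndexOf) {
        this.loadIndexOf = loadIndexOf;
    }

    @Override
    public URI pickServer(List<URI> servers) {
        // On-demand: query the load index only when a request must be placed.
        return servers.stream()
                .min(Comparator.comparingDouble(loadIndexOf))
                .orElseThrow(() -> new IllegalStateException("no servers configured"));
    }
}
```

The selector queries the load index only at the moment a request must be placed, which matches the on-demand monitoring approach adopted in the next sub-section.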

3) Load Monitoring mechanism


Literature suggests that a promising load balancing scheme needs to understand the
system's current status before making load balancing decisions (Chhabra and Singh, 2006,
Goyal, Patel, 2011). This is enabled by a monitoring mechanism that collects information
about the system status using either a periodic or an on-demand approach. In this work,
the on-demand approach is preferred over the periodic approach because it ensures that
the collected information is never outdated (Arapé, Colmenares, 2003). The on-demand load
monitoring mechanism collects system status information only when it is needed for load
balancing; that is, the information is collected dynamically when there is load to be
distributed, otherwise no information is collected. The load monitoring mechanism usually
uses the following metrics to support load balancing decisions: CPU utilization, memory
utilization, input/output (I/O) utilization and bandwidth occupancy (Qin et al., 2003,
Wang, Chen, 2009). According to Wang, Chen (2009), these load metrics can be used
individually or collectively, depending on the type of load that the application or the
service consumer requests exert on the system. In our work, we selected CPU utilization
as the monitored load metric, based on the assumption that client requests are demanding
mainly on computational resources. Moreover, it is a well-known load index with minimum
overhead (Kunz, 1991). Another reason is that, when monitoring the load of any server
machine, the calculation used to obtain the load metric should not be computationally
expensive (Zhang and Harrison, 2007), and Kunz (1991) has shown that the CPU load index
incurs minimum overhead. The CPU load index is obtained by dividing the CPU utilization
(CPUu) by the CPU capacity (CPUc) and is supported by the CPU idle time metric. The CPU
load index metric is used to obtain the status of the service provider's servers; it is
collected by the load monitoring mechanism and published to the adaptive load handling
mechanism, which contains the least loaded scheme, so that load can be distributed.

4) Client Classification mechanism


In consideration of the nature of the service-oriented market that our proposed scheme
targets, client classification was adopted as a design criterion. This led to designing a
load balancing approach that can classify client requests before the load balancing
process takes place. We explored various client prioritization approaches and chose
over-dimensioning of resources as the preferred technique for dealing with premium
clients, which require higher service quality. This choice is based on the fact that
over-dimensioning of resources provides dedicated servers for premium client requests in
order to meet their SLAs (Yigitbasi and Epema, 2010). This approach is preferred over
others because it enables a high quality of service, which is needed to keep premium
clients satisfied. In our work, premium requests are routed to dedicated server resources
only, unless the regular client servers are underutilized. We also considered an
alternative prioritization approach, as required by the over-dimensioning of resources
technique. The preferred prioritization approach, which determines the requests that
should be served according to their class of importance, has been proposed in the
literature (Bic and Shaw, 2003, Teixeira et al., 2004, Zhang and Harrison, 2007).
Applying this priority-based approach to our load balancing model, the client request
arrival rate is given by
Ri = Rip + Rir                                        (1)

where Rip denotes the arrival rate of premium requests and Rir denotes the arrival rate
of regular requests. Ri is used at the SMS broker for converting the SMS requests into
HTTP requests and for load scheduling of these requests to the service provider's
servers. As stated in the literature, premium requests have higher priority than regular
requests. To differentiate between these two classes, we used numeric values to describe
priority; the priority of each request (premium or regular) is derived by the function:

P = Priority(Ri)                                      (2)

where P is a numeric value that describes the level of priority of a client request. In
this work, a request with a higher numeric value of P is of type high priority and a
request with a lower numeric value of P is of type low priority, which map to premium and
regular clients respectively. The following example demonstrates how the prioritization
mechanism functions. Suppose we have a request arrival rate Ri comprising the following
requests:

Rp = {R1p, R2p, R3p}                                  (3)


and
Rr = {R1r, R2r, R3r}                                  (4)

which consist of premium requests Rp and regular requests Rr respectively. The premium
requests Rp are sent to their dedicated servers, with the assistance of the
over-dimensioning of resources approach, and are handled first until they finish
(Teixeira, Santana, 2004). Premium requests Rp can utilize the regular client servers if
these are underutilized. Regular requests Rr are handled when there are no more premium
requests in the queue.
We used a numeric value to describe priority and assumed that the premium or regular
priority of a request is already specified by the service consumers. Given the request
classes in equations (3) and (4), there is a need to connect the requests so that they
form part of the SMS-based service invocation system; the methodology for connecting
requests is discussed in the framework design in Section 3.2. A sketch of the resulting
routing rule is given below.
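
The following dispatch sketch combines the priority classes with over-dimensioning of resources as described above; the 0.5 "underutilized" cut-off and the class names are illustrative assumptions, and `LeastLoadedScheme` is the selector sketched in Section 3.1:

```java
import java.net.URI;
import java.util.ArrayList;
import java.util.List;

/** Sketch of the over-dimensioning routing rule: premium requests go to dedicated
 *  servers (and may spill over to underutilized regular servers); regular requests
 *  only use the regular server pool. The threshold is an assumption. */
public class OverDimensioningRouter {
    private static final double UNDERUTILIZED = 0.5;   // assumed CPU-load cutoff

    private final List<URI> premiumServers;
    private final List<URI> regularServers;
    private final LeastLoadedScheme leastLoaded;        // from the earlier sketch

    OverDimensioningRouter(List<URI> premiumServers, List<URI> regularServers,
                           LeastLoadedScheme leastLoaded) {
        this.premiumServers = premiumServers;
        this.regularServers = regularServers;
        this.leastLoaded = leastLoaded;
    }

    /** Route a request given its class and the current load of the regular pool. */
    URI route(boolean premium, double regularPoolLoad) {
        if (premium && regularPoolLoad < UNDERUTILIZED) {
            // Premium may also use underutilized regular servers.
            return leastLoaded.pickServer(concat(premiumServers, regularServers));
        }
        return leastLoaded.pickServer(premium ? premiumServers : regularServers);
    }

    private static List<URI> concat(List<URI> a, List<URI> b) {
        List<URI> all = new ArrayList<>(a);
        all.addAll(b);
        return all;
    }
}
```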
3.2 Design of the Framework
The design criteria outlined in Section 3.1, and the ways in which they are achieved,
determined which components CaLLF should contain in order to address the issues of
SMS-based service invocation environments. CaLLF consists of three main components
derived from the design criteria: the client classifier, the load balancing decision
maker and the load monitor, as shown in Figure 2. We employed an event-based
communication style (Mühl et al., 2006) in designing CaLLF. This contributed to the
scalability of CaLLF because it allows independence between components: the components
communicate with one another with little or no knowledge of one another. The event-based
communication style brings loose coupling, which facilitates scalability in the sense
that the system can grow without affecting other components. Thus, CaLLF employs a
producer/consumer paradigm in which the load balancing decision maker component acts as
a consumer while the client classifier and load monitor components act as producers.
CaLLF was incorporated into the load dispatching mechanism (SMS broker) in an SMS-based
service invocation environment. The interacting components are discussed in sub-sections
1 to 3 below.
1) Client Classifier
The client classifier component is responsible for receiving requests Ri, which may
consist of two types: premium requests (Rip) and regular requests (Rir). Premium requests
are of type high priority, so they need guarantees, while regular requests are of type
low priority and are served on a best-effort basis. The client classifier takes these
requests and categorizes them using the priority function coordinated by the
prioritization mechanism. After classification, this component relays the requests,
according to their class of importance, to the load balancing decision maker component.
The prioritization mechanism in the client classifier thus determines which class of
requests should be served first by the load balancing decision maker so that the
distribution of these requests can take place; preference is given to premium requests.
In this arrangement, the client classifier is a producer and the load balancing decision
maker is a consumer. A minimal classifier sketch is given below.
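
A minimal sketch of the client classifier as a producer; the tagging convention used to detect premium requests and the `DecisionMaker` callback are assumptions, while the priority values 7 and 4 follow the header values used later in the utility experiments (Section 4.3):

```java
import java.util.ArrayDeque;
import java.util.Queue;

/** Sketch of the client classifier: tags each incoming request with a numeric
 *  priority P = Priority(Ri) and relays it to the decision maker, premium first. */
public class ClientClassifier {
    static final int PREMIUM_PRIORITY = 7;
    static final int REGULAR_PRIORITY = 4;

    record ClassifiedRequest(String payload, int priority) {}

    interface DecisionMaker {                 // consumer side of the event channel
        void accept(ClassifiedRequest request);
    }

    private final DecisionMaker decisionMaker;

    ClientClassifier(DecisionMaker decisionMaker) {
        this.decisionMaker = decisionMaker;
    }

    /** Classify a batch of raw requests and relay them premium-first. */
    void classifyAndRelay(Iterable<String> rawRequests) {
        Queue<ClassifiedRequest> premium = new ArrayDeque<>();
        Queue<ClassifiedRequest> regular = new ArrayDeque<>();
        for (String raw : rawRequests) {
            // Assumed tagging convention for the sketch: premium requests are marked in the payload.
            int p = raw.startsWith("premium:") ? PREMIUM_PRIORITY : REGULAR_PRIORITY;
            (p == PREMIUM_PRIORITY ? premium : regular).add(new ClassifiedRequest(raw, p));
        }
        premium.forEach(decisionMaker::accept);   // premium requests relayed first
        regular.forEach(decisionMaker::accept);
    }
}
```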

Figure 2: Conceptual design of CaLLF. Incoming requests enter the client classifier, which relays the categorized requests to the load balancing decision maker; the load monitor pulls status information from the service provider resources (servers dedicated to premium clients and normal servers) and pushes it to the decision maker, which dispatches premium and regular client requests to those servers.

2) Load balancing Decision Maker Component


The load balancing decision maker component manages the distribution of the requests
relayed by the client classifier. The actual destination of a request is decided using a
load balancing strategy. This component receives load information on demand from the load
monitor component and uses the obtained data to check which service application currently
has the least amount of load. The load balancing strategy implemented by the load
balancing decision maker component is the least loaded scheme (Arapé, Colmenares, 2003),
which works as follows: requests are distributed to the server with the lowest load
index, and in this case CPU utilization is used as the load index. The CPU load index
information of the service provider's servers is pulled by the load monitor on demand and
published to the load balancing decision maker component for load balancing purposes.

3) Load Monitor Component


The load monitor is responsible for connecting to the service provider's servers to
collect load information. It acts as a producer for the load balancing decision maker
component by publishing monitored status information based on the CPU load index metric
of the service provider's servers. This status information is used by the load balancing
decision maker component to decide which service provider server the load should be sent
to. The load monitor works as follows: the on-demand approach is used to gather system
status information, and the status of each server is fetched and published to the load
balancing decision maker component whenever it is needed so that a load balancing
decision can be made. To gather the CPU load index of each server, the load monitor uses
the Hyperic Sigar Application Programming Interface (API)
(http://support.hyperic.com/display/SIGAR/Home), as sketched below.
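
A sketch of the load monitor's metric collection using the Hyperic Sigar API named above; Sigar requires its native libraries on the Java library path, and the `LoadIndex` record and the way the value is published to the decision maker are our assumptions:

```java
import org.hyperic.sigar.CpuPerc;
import org.hyperic.sigar.Sigar;
import org.hyperic.sigar.SigarException;

/** Sketch of CPU load index collection on a single server using Hyperic Sigar. */
public class CpuLoadMonitor {

    record LoadIndex(double cpuUtilization, double cpuIdle) {}

    private final Sigar sigar = new Sigar();

    /** Collect the CPU load index on demand (i.e. only when a request must be placed). */
    LoadIndex collect() throws SigarException {
        CpuPerc cpu = sigar.getCpuPerc();       // snapshot of system-wide CPU usage
        // getCombined() reports non-idle CPU time as a fraction of the total, which we
        // treat as CPUu/CPUc in the paper's terms; getIdle() is the supporting idle metric.
        return new LoadIndex(cpu.getCombined(), cpu.getIdle());
    }
}
```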

4. Evaluation Results
We evaluated CaLLF to verify that it achieves its design objectives. This section starts
with the testbed specification for CaLLF in Section 4.1. The evaluation of CaLLF had two
aspects: the first, presented in Section 4.2, evaluates the scalability (i.e. load
testing) of the least loaded scheme employed by CaLLF against RR on the testbed setup,
using simulated requests; the second, presented in Section 4.3, evaluates the utility of
CaLLF against a least loaded algorithm (LLA) that is not client-aware.
4.1 Testbed Specification
In developing our testbed, the following assumptions were made:
1. We assumed that the web services exposed by the service provider's servers are purely
computational services. As a consequence, their execution time is directly proportional
to the number of service requests sent by service consumers.
2. We assumed that the network delay is constant throughout the experimentation.

4.2 Scalability Evaluation of CaLLF


Scalability is the ability of a system to handle a growing amount of work gracefully, or
its ability to be enlarged to accommodate that growth. The evaluation focused on the
scalability of CaLLF as the number of requests processed per unit time increases, while
observing system performance. This was investigated in three dimensions: (i) scalability
of CPU usage, (ii) request throughput, and (iii) response time, each with increasing
numbers of requests being processed. The scalability of CaLLF was benchmarked against the
queuing approach with RR for load distribution. The reason for comparing CaLLF with RR is
that RR is used as the de facto standard in SMS-based service invocation environments
(Balasubramanian, Schmidt, 2004, Jongtaveesataporn and Takada, 2010, Othman,
Balasubramanian, 2003). The testbed setup, environment and design of the scalability
evaluation are discussed in the following subsections.

1) Testbed Setup and Environment for Scalability Evaluation of CaLLF


The testbed setup consisted of CaLLF and the RR scheme, which served as the benchmark.
These load balancing approaches were implemented and incorporated into a Synapse engine
(Synapse, 2012). The Synapse engine served as an SMS broker responsible for scheduling
load to the service provider's servers. The Synapse engine comes with a set of transport,
mediator and standard brokering capabilities, such as round-robin and weighted
round-robin load balancing schemes and fail-over; these capabilities account for its use
as a load balancer, and we used it to deploy CaLLF. Table 1 presents a characterization
of CaLLF and RR.

Table 1: Characterization of the CaLLF and the RR


Feature                | CaLLF               | Round Robin
Nature                 | Dynamic             | Static
Adaptability           | More adaptive       | Less adaptive
Centralised            | Yes                 | Yes
Load balancing policy  | Least loaded scheme | Round Robin scheme
Overhead               | More                | Less
Simplicity             | No                  | Yes

To mimic the SMS-based service invocation environment, we used three machines running
Windows 7, each serving a specific purpose. The first machine was used to simulate
clients via the Apache HTTP load generator. For the simulation of client requests, we
chose a fast machine so that heavy loads could be imposed on the other machines acting as
servers (Balasubramanian, Schmidt, 2004). To further avoid potential resource
constraints, the Synapse engine was deployed on the same machine on which the client
simulator (load generator) was running. The machine used for simulating client HTTP
requests and hosting the Synapse engine was an Intel Core i5 3.20 GHz PC with 3 GB of RAM
and Hyper-Threading Technology. The second and third machines were homogeneous servers
running on Intel Core2Duo 2.94 GHz PCs with 2 GB of RAM. Each of these two machines ran
the Apache Axis web service engine, coordinated by the Apache Synapse engine running on
the first machine. The servers served the requests coming from the client simulator
machine, which generated the web service requests. These requests were distributed among
the servers by the Apache Synapse engine containing CaLLF and, for benchmarking purposes,
the RR scheme. The web service that the servers ran was the same computationally
intensive service replicated on both servers. This computational service finds the
permutations of 5 elements (i.e. given "abcde", what are the possible ways in which these
characters can be ordered?); a sketch of such a service is given below. All the machines
were connected through a wired LAN in our research centre.
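
The paper only states what the replicated service computes; a sketch of such a permutation service is given below (class and method names are ours):

```java
import java.util.ArrayList;
import java.util.List;

/** Sketch of the kind of computationally intensive service replicated on both
 *  servers: enumerating all orderings of the characters in "abcde". */
public class PermutationService {

    /** Return every permutation of the given string (5! = 120 results for "abcde"). */
    static List<String> permutations(String s) {
        List<String> result = new ArrayList<>();
        permute("", s, result);
        return result;
    }

    private static void permute(String prefix, String remaining, List<String> out) {
        if (remaining.isEmpty()) {
            out.add(prefix);
            return;
        }
        for (int i = 0; i < remaining.length(); i++) {
            // Fix one character, recurse on the rest.
            permute(prefix + remaining.charAt(i),
                    remaining.substring(0, i) + remaining.substring(i + 1), out);
        }
    }

    public static void main(String[] args) {
        System.out.println(permutations("abcde").size());   // prints 120
    }
}
```

For the 5-character input "abcde" this enumerates 5! = 120 orderings, which keeps each request CPU-bound, consistent with the assumption in Section 4.1.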

2) Experimental Design for Scalability Evaluation of CaLLF


We used response time and throughput to investigate how each load balancing scheme scales
with increases in the number of client requests. Response time is measured as the time
taken from when a request is sent to when a response is received. Throughput is defined
as the maximum number of requests a load balancing algorithm can process within a unit of
time. Further analysis investigated how CaLLF consumed resources such as CPU as the
workload increased, compared to RR. For each number of requests, 10 runs were carried out
and the above-mentioned metrics were observed. The average of each metric over the 10
runs was recorded against the corresponding number of requests. The metrics were obtained
at the client side (front end) as shown in Figure 3. Two overloading variants were
considered for the experiments. In this paper, we present only the experiment in which
the servers were overloaded interchangeably throughout the process of sending requests;
that is, one server is overloaded at a certain time while the other is not, and at the
next moment the latter server is overloaded while the previously overloaded server is
relieved.

Figure 3 summarizes how the metrics were captured:
1. Deploy one computational service replicated on two servers (service endpoints).
2. Pass n requests to the Synapse engine using the load generator.
3. Record the response time, throughput and CPU utilization for each group (bulk) of requests after processing completes.

Figure 3: Capturing of all necessary metrics for our load balancing framework (CaLLF) and the RR scheme during the distribution of requests and responses.

3) Experimental Result and analysis


This subsection presents the scalability experiment and an analysis of the results
obtained. In this experiment, we evaluated the performance of the CaLLF and RR schemes in
the overloading variant scenario mentioned in subsection 2 above. The results for the
measured metrics are presented in Figures 4 to 7. Figure 4 shows that the response time
is directly proportional to the number of requests sent; however, CaLLF has a better
response time than RR. Figure 5 shows that the throughput of both the CaLLF and RR
approaches decreases as the number of client requests increases; the number of requests
sent is inversely related to the throughput, but CaLLF provides better throughput. We
introduce the inverse-of-throughput graph in Figure 6 because the original throughput
graph does not allow for asymptotic analysis, hindering the observation of scalability.
Figure 6 shows that the inverse of throughput increases with the number of requests for
both CaLLF and RR. However, CaLLF has a lower (better) inverse throughput than RR, i.e.
CaLLF could process more requests in a given period of time. Figures 4 and 6 show that
the CaLLF and RR schemes are both scalable. For further analysis, we investigated a
computational resource, the CPU, to observe how each of the load balancing solutions
utilizes it. Figure 7 shows that CPU utilization increases as the number of client
requests increases for both CaLLF and RR. From Figure 7, however, we observed that CaLLF
has higher CPU utilization than RR; this is the trade-off for the higher capabilities of
CaLLF compared with the RR scheme. Based on the above results, we conclude that CaLLF
provides better performance than RR in cases where an SMS-based service invocation
environment is dealing with increasing traffic, although this better performance comes at
the cost of higher computational power.

Figure 4: Scalability measurement of CaLLF and RR schemes: response time vs. number of requests

Figure 5: Throughput of CaLLF and RR schemes with increasing number of requests

Figure 6: Inverse Throughput of CaLLF and RR schemes with increasing number of requests


Figure 7: CPU utilization of CaLLF and RR schemes with increasing number of requests

4.3 Utility Evaluation of CaLLF

It is well known that the service-oriented market deals with various types of clients who
need different service quality. In our approach, we assume these clients fall into two
classes: premium clients, who need higher service quality, and regular clients, who need
only best-effort service. The CaLLF proposed in Section 3 combines the client
prioritization mechanism, over-dimensioning of resources and the load balancing (i.e.
least loaded) algorithm so that the load balancing approach is aware of the different
client categories when distributing request load. In this evaluation, we investigated
whether the client prioritisation mechanism in CaLLF improves client satisfaction.
Utility is given by service satisfaction, measured against a satisfaction threshold, when
the premium and regular client requests are processed and delivered. The following
sub-sections present the experimental set-up, design and results.

1) Environment Setup for Utility Evaluation of CaLLF


To investigate whether client prioritisation improves client utility, CaLLF was compared
with the non-client-aware LLA. Both CaLLF and the non-client-aware LLA were incorporated
into the Synapse engine. We used two machines running the Windows 7 operating system. The
first machine was used to simulate client HTTP requests containing a header called
priority that specifies the class of each request; this helps identify whether a client
is premium or regular. In this work, we assumed that clients are already subscribers. The
load generator, which acts as the client simulator, invokes a stock quote web service
hosted on the service provider's servers. For the purpose of serving as the load
generator, we chose a machine fast enough to impose load on the second machine, which is
the server. The load generator (client simulator) ran on an Intel Core i5 3.20 GHz PC
with 3 GB of RAM. The second machine was a server running Apache Axis, hosting a
SimpleQuote informational service for different companies such as IBM and MSFT, made
available through the Apache Synapse engine (Malaika et al., 2002, Synapse, 2012). The
load generator produces HTTP requests with one of two priority header values: requests
with priority header 7 are classified as premium and requests with priority header 4 as
regular. This allows the prioritization process to decide which request has preference
over the other using the higher priority value. The server machine ran on an Intel
Core2Duo 2.94 GHz PC with 2 GB of RAM and handled the different types of web service
requests, requiring different service quality, coming from the load generator. The load
generator machine generates requests carrying the service parameters of the stock quote
web service, which are sent via the Synapse engine; the CaLLF within it prioritizes and
distributes the requests to the server rendering the stock quote web service. For
benchmarking purposes, CaLLF was compared to the non-client-aware LLA scheme, as
mentioned earlier. A sketch of the request generation with the priority header is given
below.
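
A sketch of the load generator side, attaching the priority header described above to each HTTP request; the endpoint URL, request body and service name are placeholders rather than the actual experimental configuration:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

/** Sketch of a load generator that attaches a "priority" header
 *  (7 = premium, 4 = regular) to each HTTP request sent to the Synapse endpoint. */
public class PriorityLoadGenerator {
    private static final URI ENDPOINT = URI.create("http://synapse-host:8280/services/StockQuote");

    private final HttpClient http = HttpClient.newHttpClient();

    void sendRequest(String symbol, boolean premium) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(ENDPOINT)
                .header("priority", premium ? "7" : "4")   // class of the request
                .POST(HttpRequest.BodyPublishers.ofString("symbol=" + symbol))
                .build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " for " + symbol
                + (premium ? " (premium)" : " (regular)"));
    }
}
```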

2) Experimental Design for Utility Evaluation of CaLLF

For the purpose of evaluating utility, a service client defines a satisfaction threshold
Ti, given by a service quality related to the response time (in ms) within which a given
class of clients requires a service provider to complete processing its requests. Once
the service client sends request i to the selected service provider, we observe the
response time Xi (ms) delivered by the service provider. In other words, Xi is the
quality of experience (QoE) associated with the time the service provider took to
complete the requested service and respond to the client. The service client is said to
be satisfied if Xi is less than or equal to Ti; otherwise, the service client is said to
be dissatisfied. The pseudo-code for this procedure is shown in Figure 8.

Pass n premium requests to the Synapse engine.
Pass n regular requests to the Synapse engine.
The Synapse engine relays each request to the Axis server hosting the service.
Observe the response time Xi of each request:
    if Xi ≤ Satisfaction Threshold (Ti) then the service client is satisfied,
    else the service client is dissatisfied.
Count all the Xi that satisfied the service clients.
Record the percentage of satisfied service clients.

Figure 8: Comparative analysis of utility for CaLLF and the non-client-aware LLA.

Before investigating whether the client prioritisation implemented by CaLLF improves
client utility, we needed to find an optimal satisfaction threshold level for premium and
regular clients. To find suitable satisfaction thresholds, we randomly generated 100
requests: 50 regular and 50 premium. The requests were serviced through a non-client-aware
LLA and the average response time of each group of requests was recorded. The averages
observed were 42.70 ms for premium requests and 50.79 ms for regular requests, and these
values were taken as the initial satisfaction thresholds for the respective groups. These
thresholds were then used to experimentally determine the satisfaction thresholds that
optimise the percentage of satisfied clients in each client group. Equations (5) and (6)
were used to vary the satisfaction threshold in the range 0.5Tx to Tx in 0.05Tx intervals:

Tv(premium) = W * Tpremium                            (5)

Tv(regular) = W * Tregular                            (6)

where
Tpremium = 42.70 ms is the satisfaction threshold for premium users,
Tregular = 50.79 ms is the satisfaction threshold for regular users, and
W is a multiplier weight in the range [0.5, 1], varied at intervals of 0.05.

Satisfaction was measured as follows: given a varied satisfaction threshold for either
premium or regular clients, we observed the response time delivered by the service
provider, i.e. the quality of experience (Xi), and compared it with the current varied
satisfaction threshold (Tv). If Xi is less than or equal to the current Tv, the given
client is satisfied. From the satisfaction values measured, we obtained the percentage of
satisfied clients, given by the number of satisfied clients divided by the original
number of clients in that class of requests. The experiments for finding suitable
satisfaction threshold values were carried out with different test cases, such as running
75 premium and 25 regular requests (and vice versa) or 50 premium and 50 regular
requests. The results are shown in Figures 9 and 10. Data for Figures 9 (a) and (b) were
obtained from runs with 75% premium and 75% regular clients respectively, while Figures
10 (a) and (b) represent values obtained from runs with 50% premium and 50% regular
clients respectively. A small numerical sketch of the threshold sweep and satisfaction
computation is given below.
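
The following sketch illustrates the threshold sweep of equations (5) and (6) and the satisfaction computation; the response-time list is placeholder data, since the experiment used measured values:

```java
import java.util.List;

/** Sketch of the threshold sweep: vary W from 0.5 to 1.0 in 0.05 steps and compute
 *  the percentage of clients whose observed response time Xi is within Tv = W * T. */
public class SatisfactionSweep {

    /** Percentage of response times Xi that satisfy Xi <= threshold. */
    static double percentSatisfied(List<Double> responseTimesMs, double thresholdMs) {
        long satisfied = responseTimesMs.stream().filter(x -> x <= thresholdMs).count();
        return 100.0 * satisfied / responseTimesMs.size();
    }

    public static void main(String[] args) {
        double tPremium = 42.70;                         // base threshold from the paper (ms)
        List<Double> observed = List.of(28.0, 31.5, 35.2, 40.1, 44.8);   // placeholder Xi values

        for (double w = 0.50; w <= 1.0001; w += 0.05) {
            double tv = w * tPremium;                    // equation (5)
            System.out.printf("W=%.2f  Tv=%.2f ms  satisfied=%.1f%%%n",
                    w, tv, percentSatisfied(observed, tv));
        }
    }
}
```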
Figures 9 (a) and 10 (a) depict the effect of increasing the satisfaction threshold on
the percentage of satisfied premium clients for both the CaLLF scheme and the
non-client-aware LLA scheme. The graphs in Figures 9 (a) and 10 (a) show that, as the
varied threshold increases, the percentage of satisfied clients increases. We observed
that the percentage of satisfied premium clients changes drastically between 0.7*Tpremium
and 0.75*Tpremium and thereafter remains constant. This drastic change occurred for both
the CaLLF and non-client-aware LLA schemes, so there is a satisfaction threshold between
W = 0.7 and W = 0.75. Thus, we concluded that a satisfaction threshold of 0.75Tpremium
(32.025 ms) optimises the percentage of satisfied premium clients. We adopted, as the
satisfaction threshold value for premium clients, the average of the varied thresholds at
0.7Tpremium and 0.75Tpremium, giving the value 30.96 milliseconds. Moreover, Figures 9 (b)
and 10 (b) illustrate the effect of increasing the satisfaction threshold on the
percentage of satisfied regular clients for both CaLLF and the non-client-aware LLA.
Steady increases in the percentage of satisfied regular client requests are observed in
two ranges, 0.6Tregular to 0.65Tregular and 0.9Tregular to 0.95Tregular, for both CaLLF
and the non-client-aware LLA. Thus, the optimal satisfaction threshold was observed to be
0.95Tregular (48.25 ms).

Figure 9: Determining the optimal satisfaction threshold for users: (a) 75% premium clients, (b) 75% regular clients


Figure 10: Determining the optimal satisfaction threshold for users: (a) 50% premium clients, (b) 50% regular clients


3) Utility Evaluation Result of CaLLF

In subsection 2 above, we presented the experimental design for finding the optimal
satisfaction thresholds for premium and regular clients. These thresholds are used as the
input for investigating whether the client prioritisation in CaLLF improves client
utility compared with the non-client-aware LLA. From the process of finding an optimal
satisfaction threshold, the satisfaction threshold for premium clients was found to be
320.2596 ms and for regular clients 482.598 ms. This section presents the results for the
utility of CaLLF compared to the non-client-aware LLA, based on the acquired client
satisfaction thresholds. The experiment was set up as follows: 100 client requests were
split between premium and regular clients such that the first instance had 90 premium and
10 regular clients, and the second instance had 80 premium and 20 regular clients. This
decrement of premium requests by 10 and increment of regular requests by 10 was repeated
until the regular clients reached 90 and the premium clients 10. The percentage of
satisfaction and the number of clients were recorded for each round. These requests were
handled by CaLLF and by the non-client-aware LLA, for the purpose of comparing the
utility of a load balancing solution that is aware of client classes against one that is
not. The results are presented in this section.
Figure 11 shows the percentage of satisfied clients against an increasing proportion of
premium and decreasing proportion of regular client requests, for the utility of CaLLF
and of the non-client-aware LLA. We observed that, as the proportion of premium requests
increases, the overall satisfaction percentage of CaLLF decreases, while the satisfaction
percentage of the non-client-aware LLA remains more or less constant. At this aggregate
level, the non-client-aware LLA therefore appears to provide better utility than CaLLF.
This is because the non-client-aware LLA gives equal chances to premium and regular
requests, whereas CaLLF focuses on meeting the satisfaction level of premium requests
while neglecting regular requests, which receive best-effort service; this affects the
overall satisfaction percentage of CaLLF because the regular requests are mostly
unsatisfied. We concluded that CaLLF provides better utility for premium clients at the
expense of regular requests. To substantiate this conclusion, we present the individual
satisfaction levels for premium and regular requests in Figures 12 and 13.

Figure 11: Comparison of CaLLF’s satisfaction levels for all clients against that of non-client aware LLA

Figure 12: Comparison of CaLLF’s satisfaction levels for premium clients against that of non-client
aware LLA

Figure 13: Comparison of CaLLF’s satisfaction levels for regular clients against that of non-client aware
LLA

Figure 12 presents the findings for the percentage of satisfied premium clients versus
the number of premium requests. The graph shows that CaLLF outperforms the
non-client-aware LLA in terms of the utility of premium client requests, because the
non-client-aware LLA treats all requests as the same type and is not aware of request
priority or required service quality. As shown in Figure 13, the non-client-aware LLA
outperformed CaLLF in terms of the utility of regular client requests. This is because
CaLLF gives preference to premium requests over regular requests before distribution, so
regular requests are neglected for some time, resulting in longer wait times for regular
clients. This confirms the conclusion drawn from Figure 11.


5. Conclusion and future work


In this paper, we proposed a system that can handle unexpected traffic or sudden load
peaks in a scalable manner while maintaining performance requirements in an SMS-based
service invocation environment. Services are created and exposed for consumption via a
service interface chosen by the service provider. The popularity of SMS has led service
providers to use it as a service interface because it allows their services to reach a
massive market: SMS-based service invocation reaches a larger audience of service
consumers, and it operates in a dynamic environment, the service-oriented environment,
which is unpredictable. We therefore proposed CaLLF, a load balancing approach that is
aware of the different types of clients that participate in the service-oriented market,
for fairness purposes, since the service-oriented environment deals with various types of
clients. Against this background, over-dimensioning of resources and classification and
prioritisation mechanisms were introduced in the design of CaLLF. The proposed CaLLF
consists of the load balancing decision maker, load monitor and client classifier
components. These components were shown to be important in an SMS-based service
invocation environment in terms of handling load while providing adequate performance;
moreover, combining them in the design of CaLLF distinguishes it from other load
balancing approaches.
The performance expectations of CaLLF were validated against existing schemes. The
proposed CaLLF and RR were first evaluated in an experimental testbed configuration,
which was used to conduct performance evaluation experiments between the two approaches.
This work was driven first by finding a load balancing approach that addresses the issue
of unexpected traffic and sudden load peaks; while this is the main theme, it was also
necessary to include a client-aware mechanism. The second evaluation tested the client
awareness of CaLLF compared with the non-client-aware LLA, in order to observe the
satisfaction levels achieved by CaLLF over the non-client-aware LLA. In the testbed
experimentation, the following evaluation parameters were used: scalability and utility.
Analysis of the results showed that both the CaLLF and RR schemes are scalable, but CaLLF
had better performance than the RR scheme in terms of response time and throughput.
Goyal, Patel (2011) state that dynamic load balancing algorithms, one of which is
implemented by CaLLF, are robust and flexible for unpredictable environments such as
distributed systems. The experiments showed that CaLLF has improved performance, i.e. it
handles an increasing amount of load in the system without degrading performance. The
results obtained from the experimental testbed also revealed that CaLLF brings more
satisfaction to consumers who require high service quality (i.e. premium clients) than
the non-client-aware LLA. In essence, the evaluations conducted show that the CaLLF
proposed in this research is able to handle increasing load in a scalable manner while
providing adequate performance, and that it is able to serve clients according to their
category, whether regular or premium.
Although the proposed CaLLF has proved to be an effective approach in dynamic
environments such as SMS-based service invocation in service-oriented environments, it
has some limitations that demand future enhancements. The developed CaLLF has higher CPU
utilisation than the RR scheme, which may be due to the number of capabilities that CaLLF
has. Even though this high CPU utilisation was expected, in future CaLLF will be
fine-tuned to achieve the same improvements while bringing the CPU utilisation down to a
more satisfactory level. Another limitation is that the experiments with CaLLF only
considered homogeneous servers; future enhancements of CaLLF will be tested against the
RR scheme in a heterogeneous environment to compare performance. Moreover, testing the
over-dimensioning of resources aspect of CaLLF is part of our future work. Lastly, an
approach for obtaining satisfaction thresholds for premium and regular clients will be
addressed in future work, probably by training or some other approach.

Acknowledgement

This work is based on research supported in part by the National Research Foundation of
South Africa, Grant UID: TP11062500001 (2012-2014).

The authors also acknowledge funds received from industry partners: Telkom SA Ltd,
Huawei Technologies SA (Pty) Ltd and Dynatech Information Systems, South Africa in
support of this research.

References
Al-Raqabani A, Barada H, Benlamri R. Performance of probing and coordinated load sharing. Proc of
the 17th IASTED International Conference on Parallel and Distributed Computing and
Systems; 2005.
Andrikopoulos V, Benbernou S, Papazoglou MP. On the evolution of services. Software Engineering,
IEEE Transactions on. 2012;38:609-28.
Arapé N, Colmenares JA, Queipo NV. On the Development of an Enhanced Least Loaded Strategy for
the CORBA Load Balancing and Monitoring Service. Proc 16th Int'l Conference on Parallel and
Distributed Computing Systems, Reno, Nevada, USA; 2003.
Babu PD, Amudha T. A Novel Genetic Algorithm for Effective Job Scheduling in Grid Environment.
Computational Intelligence, Cyber Security and Computational Models: Springer; 2014. p. 385-93.
Balasubramanian J, Schmidt DC, Dowdy L, Othman O. Evaluating the performance of middleware load
balancing strategies. Enterprise Distributed Object Computing Conference, 2004 EDOC 2004
Proceedings Eighth IEEE International: IEEE; 2004. p. 135-46.
Bic L, Shaw AC. Operating systems principles: Prentice Hall; 2003.
Bonald T, Roberts J. Performance modeling of elastic traffic in overload. ACM SIGMETRICS
Performance Evaluation Review: ACM; 2001. p. 342-3.
Boone B, Van Hoecke S, Van Seghbroeck G, Joncheere N, Jonckers V, De Turck F, et al. SALSA: QoS-
aware load balancing for autonomous service brokering. Journal of Systems and Software.
2010;83:446-56.
Bourke T. Server load balancing: O'Reilly Media, Inc.; 2001.
Brown J, Shipman B, Vetter R. SMS: The short message service. Computer. 2007;40:106-10.
Cardellini V, Casalicchio E, Colajanni M, Yu PS. The state of the art in locally distributed Web-server
systems. ACM Computing Surveys (CSUR). 2002;34:263-311.
Cardellini V, Colajanni M, Philip SY. Dynamic load balancing on web-server systems. IEEE Internet
computing. 1999a;3:28-39.
Cardellini V, Colajanni M, Yu PS. Dynamic load balancing on web-server systems. Internet Computing,
IEEE. 1999b;3:28-39.
Channabasavaiah K, Holley K, Tuggle E. Migrating to a service-oriented architecture. IBM
DeveloperWorks. 2003;16.
Cherkasova L, Phaal P. Session-based admission control: A mechanism for peak load management of
commercial web sites. Computers, IEEE Transactions on. 2002;51:669-85.
Cheung AKY, Jacobsen H-A. Dynamic load balancing in distributed content-based publish/subscribe:
Springer; 2006.
Chhabra A, Singh G. Qualitative Parametric Comparison of Load Balancing Algorithms in Distributed
Computing Environment. Advanced Computing and Communications, 2006 ADCOM 2006
International Conference on: IEEE; 2006. p. 58-61.
Christodoulopoulos K, Varvarigos E, Develder C, De Leenheer M, Dhoedt B. Job demand models for
optical grid research. Optical Network Design and Modeling: Springer; 2007. p. 127-36.
Cortes A, Ripoll A, Senar M, Luque E. Performance comparison of dynamic load-balancing strategies for
distributed computing. System Sciences, 1999 HICSS-32 Proceedings of the 32nd Annual Hawaii
International Conference on: IEEE; 1999. p. 10 pp.
Elfatatry A, Layzell P. Negotiating in service-oriented environments. Communications of the ACM.
2004;47:103-8.
Erol O, Sauser B, Boardman JT. Creating Enterprise Flexibility Through Service-Oriented Architecture.
The Flexible Enterprise: Springer; 2014. p. 27-36.
Fco L Javier, Martinez JC. Improving Dynamic Load Balancing Under CORBA With a Genetic Strategy
in a Neural System of Off-line Signature Verification. The 2007 International Conference on
Parallel and Distributed Processing Techniques and Applications in Computer Science &
Computer Engineering; 2007.
Gondhi NK, Pant D. An evolutionary approach for scalable load balancing in cluster computing.
Advance Computing Conference, 2009 IACC 2009 IEEE International: IEEE; 2009. p. 1259-64.
Goyal SK, Patel R, Singh M. Adaptive and dynamic load balancing methodologies for distributed
environment: a review. International Journal of Engineering Science and Technology (IJEST).
2011;3:1835-40.
Grajales III FJ, Sheps S, Ho K, Novak-Lauscher H, Eysenbach G. Social media: a review and tutorial of
applications in medicine and health care. Journal of medical Internet research. 2014;16:e13.
Grosu D, Chronopoulos AT, Leung M-Y. Load balancing in distributed systems: An approach using
cooperative games. Parallel and Distributed Processing Symposium, Proceedings International,
IPDPS 2002, Abstracts and CD-ROM: IEEE; 2002. p. 52-61.
Guo H. A Bayesian approach for automatic algorithm selection. IJCAI 2003 Workshop on AI and
Autonomic Computing, Mexico: Citeseer; 2003. p. 1-5.
Hahne EL. Round-robin scheduling for max-min fairness in data networks. Selected Areas in
Communications, IEEE Journal on. 1991;9:1024-39.
Hahne EL, Gallager RG. Round robin scheduling for fair flow control in data communication networks.
DTIC Document; 1986.
Jędrzejowicz P, Ratajczak-Ropel E. Reinforcement Learning Strategy for Solving the Resource-
Constrained Project Scheduling Problem by a Team of A-Teams. Intelligent Information and
Database Systems: Springer; 2014. p. 197-206.
Jones NB, Graham C. Practices and Tools in Online Course Delivery. Learning Management Systems and
Instructional Design: Best Practices in Online Education. 2013:288.
Jongtaveesataporn A, Takada S. Enhancing enterprise service bus capability for load balancing. WSEAS
Transactions on Computers. 2010;9:299-308.
Jun L. Mobilized by Mobile Media. How Chinese People use mobile phones to change politics and
democracy: University of Copenhagen. Faculty of Humanities; 2013.
Karasan E, Ayanoglu E. Effects of wavelength routing and selection algorithms on wavelength
conversion gain in WDM optical networks. Networking, IEEE/ACM Transactions on. 1998;6:186-
96.
Kopparapu C. Load balancing servers, firewalls, and caches: John Wiley & Sons; 2002.
Kunz T. The influence of different workload descriptions on a heuristic load balancing scheme. Software
Engineering, IEEE Transactions on. 1991;17:725-30.
Lin MTNG, Silva TPC, dos Santos RO, da Silva Neto AF. SMBots: an architecture to manage dynamic
services based on SMS. Mobile Data Management (MDM), 2010 Eleventh International Conference
on: IEEE; 2010. p. 311-3.
Malaika S, Nelin CJ, Qu R, Reinwald B, Wolfson DC. DB2 and Web services. IBM Systems Journal.
2002;41:666-85.
Mühl G, Fiege L, Pietzuch P. Distributed event-based systems: Springer Heidelberg; 2006.
Muppala S, Zhou X. Coordinated session-based admission control with statistical learning for multi-tier
internet applications. Journal of Network and Computer Applications. 2011;34:20-9.
Nikravan M, Kashani M. A genetic algorithm for process scheduling in distributed operating systems
considering load balancing. In: Zelinka I, Oplatkova Z, Orsoni A, editors. Proceedings of the 21st
European Conference on Modelling and Simulation (ECMS); 2007.
Othman O, Balasubramanian J, Schmidt DC. The design of an adaptive middleware load balancing and
monitoring service. LNCS/LNAI: Proceedings of the Third International Workshop on Self-
Adaptive Software; 2003.
Papazoglou M. Web services: principles and technology: Addison-Wesley; 2008.
Parent J, Verbeeck K, Lemeire J. Adaptive Load Balancing of Parallel Applications with Reinforcement
Learning on Heterogeneous Networks. 2002.
Petrovic S, Fayad C. A genetic algorithm for job shop scheduling with load balancing. AI 2005:
Advances in Artificial Intelligence: Springer; 2005. p. 339-48.
Piroozfard H, Hassan A, Moghadam AM, Derakhshan Asl A. A Hybrid Genetic Algorithm for Solving
Job Shop Scheduling Problems. Advanced Materials Research. 2014;845:559-63.
Qin X, Jiang H, Zhu Y, Swanson DR. Dynamic load balancing for I/O-intensive tasks on heterogeneous
clusters. High Performance Computing-HiPC 2003: Springer; 2003. p. 300-9.
Ramana K, Ghatage M. Load balancing of services with server initiated connections. Personal Wireless
Communications, 2005 ICPWC 2005 IEEE International Conference on: IEEE; 2005. p. 254-7.
Rao A, Lakshminarayanan K, Surana S, Karp R, Stoica I. Load balancing in structured P2P systems.
Peer-to-Peer Systems II: Springer; 2003. p. 68-79.
Revar A, Andhariya M, Sutariya D, Bhavsar M. Load balancing in grid environment using machine
learning-innovative approach. International Journal of Computer Applications. 2010;8:31-4.
Risi D, da S T, Ricardo M, Silva TPC. GEMS: SMS-based app store for Growth Economies. Consumer
Communications and Networking Conference (CCNC), 2013 IEEE: IEEE; 2013. p. 855-6.
Risi D, Teófilo M. MobileDeck: turning SMS into a rich user experience. Proceedings of the 6th
International Conference on Mobile Technology, Application & Systems: ACM; 2009. p. 33.
Romtveit T. Load-balancing by Applying a Bayesian Learning Automata (BLA) Scheme in a Non-
stationary Web-crawler Network: T. Romtveit; 2010.
Sang J, Yang E. Weighted round robin cell architecture. Google Patents; 2003.
Saxena N, Chaudhari NS. EasySMS: A Protocol for End-to-End Secure Transmission of SMS. 2013.
Shen G, Bose SK, Cheng TH, Lu C, Chai TY. Efficient wavelength assignment algorithms for light paths
in WDM optical networks with/without wavelength conversion. Photonic Network
Communications. 2000;2:349-59.
Sherif MH, Seo D. Government role in information and communications technology innovations.
International Journal of Technology Marketing. 2013;8:4-23.
Shirazi BA, Kavi KM, Hurson AR. Scheduling and load balancing in parallel and distributed systems:
IEEE Computer Society Press; 1995.
Song S, Lv T, Chen X. A Static Load Balancing algorithm for Future Internet. TELKOMNIKA
Indonesian Journal of Electrical Engineering. 2014;12.
Sonntag S, Reinig H. An Efficient Weighted-Round-Robin Algorithm for Multiprocessor Architectures.
Simulation Symposium, 2008 ANSS 2008 41st Annual: IEEE; 2008. p. 193-9.
Sprott D, Wilkes L. Understanding service-oriented architecture. The Architecture Journal. 2004;1:10-7.
Synapse. Apache Synapse Project. Apache Software Foundation; 2012.
Teixeira MM, Santana MJ, Santana RH. Using adaptive priority scheduling for service differentiation
QoS-aware Web servers. Performance, Computing, and Communications, 2004 IEEE International
Conference on: IEEE; 2004. p. 279-85.
Teófilo M, Cavalcanti L, de Lucena VF. A SMS-based application store for emerging market: a case
study. SIGGRAPH Asia 2013 Symposium on Mobile Graphics and Interactive Applications: ACM;
2013. p. 66.
Wang J, Chen J-w, Wang Y-l, Zheng D. Intelligent load balancing strategies for complex distributed
simulation applications. Computational Intelligence and Security, 2009 CIS'09 International
Conference on: IEEE; 2009. p. 182-6.
Weintraub G, Ophir S, Biran O, McElhinney D, Ben-Yehuda I. Mobile roaming prepaid solutions.
Google Patents; 2013.
Wenzheng L, Hongyan S. Novel algorithm for load balancing in cluster systems. Computer Supported
Cooperative Work in Design (CSCWD), 2010 14th International Conference on: IEEE; 2010. p.
413-6.
Yan C, Zhu M, Shi Y. A Response Time based Load Balancing Algorithm for Service Composition.
Pervasive Computing and Applications, 2008 ICPCA 2008 Third International Conference on:
IEEE; 2008. p. 13-6.

Yigitbasi N, Epema D. Overdimensioning for consistent performance in grids. Proceedings of the 2010
10th IEEE/ACM International Conference on Cluster, Cloud and Grid Computing: IEEE Computer
Society; 2010. p. 526-9.
Zerfos P, Meng X, Wong SHY, Samanta V, Lu S. A study of the short message service of a nationwide
cellular network. Proceedings of the 6th ACM SIGCOMM conference on Internet measurement.
Rio de Janeiro, Brazil: ACM; 2006. p. 263-8.
Zhang Y, Harrison P. Performance of a priority-weighted round robin mechanism for differentiated
service networks. Computer Communications and Networks, 2007 ICCCN 2007 Proceedings of
16th International Conference on: IEEE; 2007. p. 1198-203.
