1 Introduction
A case study using Amazon EC2 was carried out, for which an agent-based system
prototype was implemented using (i) the Java Agent DEvelopment framework (JADE)
[6], (ii) the Bouncy Castle Crypto APIs [7] (for encrypting and decrypting Cloud
resources’ passwords), and (iii) the Amazon SDK for Java [5].
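The password handling can be illustrated with a small sketch. This is an assumption-laden stand-in, not the prototype's code: the paper used the Bouncy Castle Crypto APIs, while the example below uses the standard Java JCE (AES/GCM) to show the general idea of protecting a Cloud resource's password while it travels between agents; the class and method names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.GeneralSecurityException;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Illustrative sketch only: the prototype used the Bouncy Castle Crypto APIs [7];
// this stand-in uses the standard JCE (AES/GCM) instead.
class PasswordCrypto {
    private static final int IV_LEN = 12;     // recommended GCM nonce length (bytes)
    private static final int TAG_BITS = 128;  // GCM authentication tag length
    private final SecretKey key;

    PasswordCrypto(SecretKey key) { this.key = key; }

    static SecretKey newKey() {
        try {
            KeyGenerator kg = KeyGenerator.getInstance("AES");
            kg.init(128);
            return kg.generateKey();
        } catch (GeneralSecurityException e) { throw new IllegalStateException(e); }
    }

    // Returns Base64(iv || ciphertext) so the receiver can split off the nonce.
    String encrypt(String password) {
        try {
            byte[] iv = new byte[IV_LEN];
            new SecureRandom().nextBytes(iv);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, iv));
            byte[] ct = c.doFinal(password.getBytes(StandardCharsets.UTF_8));
            byte[] out = new byte[IV_LEN + ct.length];
            System.arraycopy(iv, 0, out, 0, IV_LEN);
            System.arraycopy(ct, 0, out, IV_LEN, ct.length);
            return Base64.getEncoder().encodeToString(out);
        } catch (GeneralSecurityException e) { throw new IllegalStateException(e); }
    }

    String decrypt(String token) {
        try {
            byte[] in = Base64.getDecoder().decode(token);
            Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
            c.init(Cipher.DECRYPT_MODE, key, new GCMParameterSpec(TAG_BITS, in, 0, IV_LEN));
            byte[] pt = c.doFinal(in, IV_LEN, in.length - IV_LEN);
            return new String(pt, StandardCharsets.UTF_8);
        } catch (GeneralSecurityException e) { throw new IllegalStateException(e); }
    }
}
```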
The Cloud resource ontology (Fig. 2) was provided with Cloud resource definitions
based on (i) Amazon instance types and (ii) Amazon machine images (AMIs), for a
total of 58 different Cloud resource definitions that resulted from the valid
combinations (i.e., not all the AMIs can be run on a given instance type) between
instance types and AMIs (Table 1).
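How such a set of valid combinations could be derived can be sketched as follows. The class and method names, and the architecture-based compatibility rule, are illustrative assumptions rather than the paper's actual ontology code; the sketch only shows the idea that not every AMI can run on every instance type, so only compatible pairs become resource definitions.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: a Cloud resource definition is a compatible
// (instance type, AMI) pair; here "compatible" means matching architectures.
class ResourceDefinitions {
    // typeArch maps an instance type to its architecture; amiArch does the
    // same for AMIs.
    static List<String[]> validCombinations(Map<String, String> typeArch,
                                            Map<String, String> amiArch) {
        List<String[]> defs = new ArrayList<>();
        for (Map.Entry<String, String> t : typeArch.entrySet())
            for (Map.Entry<String, String> a : amiArch.entrySet())
                if (t.getValue().equals(a.getValue()))   // architecture compatibility
                    defs.add(new String[]{t.getKey(), a.getKey()});
        return defs;
    }
}
```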
The agents involved in the case study were 1 CA, 5 BAs, 5 SPAs, and 2500 RAs.
Each CA, BA, and SPA was deployed on a different JADE agent container (see Fig. 1
and Fig. 4), i.e., on its own instance of a JADE runtime environment. In
addition, since RAs do not interact among themselves, all the RAs were deployed on
a single container (Container-1 in Fig. 4). In doing so, SPAs had to contact RAs
located at a remote location, while an unnecessarily large number of containers
was avoided in the system prototype. The Cloud resource type of each RA was
randomly selected from the 58 available Cloud resource types (Table 1). Moreover,
the RAs were randomly assigned to the SPAs to simulate a scenario with highly
heterogeneous Cloud providers.
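The random set-up above can be sketched in a few lines. The class and method names are illustrative (not taken from the prototype); the sketch only shows that each of the 2500 RAs receives one of the 58 resource types and one of the 5 SPAs uniformly at random, which is what yields the highly heterogeneous simulated providers.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Random;

// Hypothetical sketch of the case-study set-up: random type and SPA per RA.
class CaseStudySetup {
    // Maps an RA name to {resource type index, owning SPA index}.
    static Map<String, int[]> assignRandomly(int numRAs, int numTypes,
                                             int numSPAs, long seed) {
        Random rnd = new Random(seed);          // seeded for reproducibility
        Map<String, int[]> assignment = new LinkedHashMap<>();
        for (int i = 1; i <= numRAs; i++) {
            int type = rnd.nextInt(numTypes);   // 0..numTypes-1
            int spa = rnd.nextInt(numSPAs);     // 0..numSPAs-1
            assignment.put("RA" + i, new int[]{type, spa});
        }
        return assignment;
    }
}
```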
Fig. 4. JADE sniffer agent showing an agent-based Cloud resource allocation scenario.
All the agent containers were registered in a main JADE container, which
manages and supports the agent-based platform by (i) handling asynchronous
message-passing communication through Java RMI and IIOP, (ii) starting and killing
agents, and (iii) providing services such as a directory facilitator agent (the
Cloud directory), a sniffer agent, and a remote management agent; see [6] for
details of JADE.
Fig. 5. AWS management console – Key pairs option.
The CA was provided with a Cloud resource allocation request composed of 6 Cloud
resources: 4 m1.small instances with the AMI ami-8c1fece5 (Basic 32-bit Amazon
Linux AMI 2011.02.1 Beta) and 2 m1.large instances with the AMI ami-8e1fece7
(Basic 64-bit Amazon Linux AMI 2011.02.1 Beta).
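One simple way to represent such a request is sketched below. This is an illustrative data layout, not the paper's actual classes: the two resource groups are flattened into one {instance type, AMI} item per Cloud resource, so that a broker could later run one negotiation per item.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical flattened representation of the CA's allocation request.
class AllocationRequest {
    static List<String[]> caseStudyRequest() {
        List<String[]> items = new ArrayList<>();
        for (int i = 0; i < 4; i++)
            items.add(new String[]{"m1.small", "ami-8c1fece5"});  // 32-bit group
        for (int i = 0; i < 2; i++)
            items.add(new String[]{"m1.large", "ami-8e1fece7"});  // 64-bit group
        return items;
    }
}
```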
Fig. 7. Console output for agent-based resource allocations using Amazon SDK.
The CA submitted the allocation request to the 5 BAs using the CNP. The
selected BA then executed 6 CNPs (one for each Cloud resource to be allocated)
with the 5 SPAs in parallel. Finally, the selected SPAs requested the Cloud
resource allocations from their RAs. Fig. 4 shows an extract of the messages
exchanged among all the agents to carry out the Cloud resource allocation
request, which was fulfilled by agent BA4 (the BA selected by consumer CA1). The
messages received by agent BA4 (Fig. 4) came from the SPAs bidding to allocate a
Cloud resource and/or providing data to access the recently allocated Cloud
resources, e.g., the messages exchanged between agents SPA1 and BA4 (see Fig. 4).
In addition, as soon as allocation data (the public IP address and password
needed to access a given Cloud resource) was received, broker BA4 forwarded the
data to consumer CA1, as shown at the bottom of Fig. 4. The interleaving of the
messages received by agent BA4 from all the SPAs (see Fig. 4) is the result of
the parallel execution of the CNPs for allocating Cloud resources.
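The broker's parallel negotiation can be sketched in plain Java. The prototype instead uses JADE behaviours implementing the CNP; the stand-in below only shows the structure: one Contract Net round per Cloud resource, all rounds executed in parallel, each round awarding its resource to the SPA with the lowest bid. The bid function is supplied by the caller, so any pricing shown is an assumption of the example.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.BiFunction;

// Hypothetical stand-in for the broker's parallel CNPs (JADE-free).
class ParallelCnp {
    // Returns, for each resource (in order), the name of the winning SPA.
    static List<String> runCnps(List<String> resources, List<String> spas,
                                BiFunction<String, String, Double> bidOf) {
        ExecutorService pool = Executors.newFixedThreadPool(resources.size());
        try {
            List<Future<String>> rounds = new ArrayList<>();
            for (String res : resources) {
                rounds.add(pool.submit(() -> {
                    String winner = null;                  // call for proposals...
                    double best = Double.POSITIVE_INFINITY;
                    for (String spa : spas) {              // ...collect the bids
                        double bid = bidOf.apply(spa, res);
                        if (bid < best) { best = bid; winner = spa; }
                    }
                    return winner;                         // accept-proposal target
                }));
            }
            List<String> winners = new ArrayList<>();
            for (Future<String> f : rounds) winners.add(f.get());
            return winners;
        } catch (InterruptedException | ExecutionException e) {
            throw new IllegalStateException(e);
        } finally {
            pool.shutdown();
        }
    }
}
```

Running the rounds on separate threads is what produces the interleaved messages visible in Fig. 4.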
The RAs were provided with (i) Amazon EC2 API tools to handle Cloud resource
allocations, and (ii) Amazon AWS security credentials to access Amazon EC2. It
should be noted that although the RAs shared the same security credentials (i.e.,
all the RAs accessed Amazon EC2 using the same Amazon AWS account), sharing the
credentials gave no advantage to the agent-based Cloud resource allocation
approach.
When the RAs received the SPAs’ requests to allocate Cloud resources, the RAs
created new RSA key pairs to access Amazon EC2 (Fig. 5). The key pairs were
automatically named based on the identifiers of the RAs that allocated the Cloud
resources, e.g., newKeyRA2462 (see the left side of Fig. 5). Right afterwards, the RAs
proceeded to allocate the Cloud resources (Fig. 6) corresponding to the CA’s initial
allocation request (consisting of 6 Cloud resources, see Section 3.3).
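The naming convention reported above can be captured in a one-line helper. The class and method names are illustrative; in the prototype the resulting name would then be passed to Amazon EC2 through the Amazon SDK for Java (e.g., in a CreateKeyPairRequest), a call omitted here to keep the sketch self-contained.

```java
// Hypothetical helper for the key-pair naming convention: each RA derives its
// key-pair name from its own identifier, which keeps names unique per agent.
class KeyPairNames {
    static String keyPairNameFor(String raId) {
        return "newKey" + raId;   // e.g., "RA2462" -> "newKeyRA2462"
    }
}
```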
The console output of the agent-based system (Fig. 7) corresponding to the CA’s
allocation request shows: (i) JADE initialization messages displaying agent
containers’ addresses and names, (ii) self-generated output messages derived from the
creation of key pairs by using Amazon SDK, and (iii) self-generated output messages
derived from the Amazon instance allocations by using Amazon SDK. In general, the
self-generated output messages contained the following information: timestamp, key
pair name, AWS access key, type of instance allocated, etc.; see Fig. 7 for details.
Since Amazon instances take some time to become fully functional (i.e., to start),
and the delay may vary with the size of the AMIs and the number of instances to be
allocated, among other factors [2], the RAs repeatedly checked (every 200 s)
whether the Amazon instances were up and running by retrieving the console output
of the recently allocated instances as an indication that the instances had
started. Once the RAs detected output on the instances’ consoles, the RAs
proceeded to extract the public IP addresses and passwords (which is only possible
when the instances are up and running) and forwarded them to their corresponding
SPAs.
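The readiness check amounts to a polling loop, sketched below. In the prototype the console output comes from Amazon EC2 and the interval is 200 s; here both are parameters (and the output source is a plain supplier) so the loop can be shown and tested in isolation. Names are illustrative.

```java
import java.util.function.Supplier;

// Hypothetical sketch of the RAs' readiness check: poll until the instance's
// console output is non-empty (the signal that the instance has started).
class InstancePoller {
    static String waitForConsoleOutput(Supplier<String> consoleOutput,
                                       long intervalMillis, int maxAttempts) {
        for (int i = 0; i < maxAttempts; i++) {
            String out = consoleOutput.get();  // EC2 console output in the prototype
            if (out != null && !out.isEmpty())
                return out;                    // up and running: extract IP/password next
            try {
                Thread.sleep(intervalMillis);  // 200 000 ms in the case study
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return null;
            }
        }
        return null;                           // gave up waiting
    }
}
```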
4 Related Work
Resource allocation mechanisms have been widely investigated (see [12]). However,
little attention has been directed to (i) Cloud resource allocation in multi-Cloud
environments and (ii) actual implementations of autonomous Cloud resource
allocation mechanisms. Whereas current Cloud management systems (see [8], [10],
and [14]) may allocate Cloud resources from different Clouds to execute consumers’
applications, no explicit consideration has been given to autonomous Cloud
resource selection based on the fees associated with Cloud resources. In contrast,
the present work uses both the agent paradigm and the CNP to (i) sample Cloud
resources’ hourly cost rates and (ii) allocate Cloud resources in multi-Cloud
environments in an autonomous and dynamic manner.
In addition, the proposed agent-based Cloud resource allocation mechanism is fully
distributed, in contrast to centralized allocation mechanisms (see [4]) that require a
central control entity (allocator) that commonly becomes a system bottleneck.
The contributions of this paper are as follows: (i) devising what is, to the best
of the authors’ knowledge, the first agent-based Cloud architecture for resource
allocation in multi-Cloud environments, and (ii) implementing and deploying the
agent-based resource allocation mechanism in commercial Clouds, using Amazon EC2
as a case study.
In this work, Cloud resource allocation in multi-Cloud environments was handled
by autonomous agents equipped with the CNP to (i) dynamically sample hourly cost
rates and (ii) support cost-based Cloud resource allocation among self-interested
Cloud participants. By using the agent paradigm, Cloud consumers can efficiently
(i.e., with the lowest allocation costs) allocate heterogeneous sets of Cloud
resources from multiple, distributed Cloud providers in a dynamic and autonomous
manner, as shown in the Amazon EC2 case study.
Since this work provides the foundations for a general-purpose agent-based multi-
Cloud platform by providing an infrastructure-as-a-service solution (allocated from
multiple Cloud providers) to Cloud consumers, future research directions include: (i)
adding agent capabilities to schedule and execute both workflows and bag-of-tasks
applications in multi-Cloud environments, and (ii) implementing access to more
commercial Cloud providers, such as GoGrid [9] and RackSpace [11].
References