Automating Application Deployment in Infrastructure Clouds
Gideon Juve and Ewa Deelman
USC Information Sciences Institute
Marina del Rey, California, USA
{gideon,deelman}@isi.edu

Abstract—Cloud computing systems are becoming an important platform for distributed applications in science and engineering. Infrastructure as a Service (IaaS) clouds provide the capability to provision virtual machines (VMs) on demand with a specific configuration of hardware resources, but they do not provide functionality for managing resources once they are provisioned. In order for such clouds to be used effectively, tools need to be developed that can help users to deploy their applications in the cloud. In this paper we describe a system we have developed to provision, configure, and manage virtual machine deployments in the cloud. We also describe our experiences using the system to provision resources for scientific workflow applications, and identify areas for further research.

Keywords—cloud computing; provisioning; application deployment

I. INTRODUCTION

Infrastructure as a Service (IaaS) clouds are becoming an important platform for distributed applications. These clouds allow users to provision computational, storage and networking resources from commercial and academic resource providers. Unlike other distributed resource sharing solutions, such as grids, users of infrastructure clouds are given full control of the entire software environment in which their applications run. The benefits of this approach include support for legacy applications and the ability to customize the environment to suit the application. The drawbacks include increased complexity and additional effort required to set up and deploy the application.

Current infrastructure clouds provide interfaces for allocating individual virtual machines (VMs) with a desired configuration of CPU, memory, disk space, etc. However, these interfaces typically do not provide any features to help users deploy and configure their application once resources have been provisioned. In order to make use of infrastructure clouds, developers need software tools that can be used to configure dynamic execution environments in the cloud.

The execution environments required by distributed scientific applications, such as workflows and parallel programs, typically require a distributed storage system for sharing data between application tasks running on different nodes, and a resource manager for scheduling tasks onto nodes [12]. Fortunately, many such services have been developed for use in traditional HPC environments, such as clusters and grids. The challenge is how to deploy these services in the cloud given the dynamic nature of cloud environments. Unlike clouds, clusters and grids are static environments. A system administrator can set up the required services on a cluster and, with some maintenance, the cluster will be ready to run applications at any time. Clouds, on the other hand, are highly dynamic. Virtual machines provisioned from the cloud may be used to run applications for only a few hours at a time. In order to make efficient use of such an environment, tools are needed to automatically install, configure, and run distributed services in a repeatable way.

Deploying such applications is not a trivial task. It is usually not sufficient to simply develop a virtual machine (VM) image that runs the appropriate services when the virtual machine starts up, and then just deploy the image on several VMs in the cloud. Often the configuration of distributed services requires information about the nodes in the deployment that is not available until after nodes are provisioned (such as IP addresses, host names, etc.) as well as parameters specified by the user. In addition, nodes often form a complex hierarchy of interdependent services that must be configured in the correct order. Although users can manually configure such complex deployments, doing so is time consuming and error prone, especially for deployments with a large number of nodes. Instead, we advocate an approach where the user is able to specify the layout of their application declaratively, and use a service to automatically provision, configure, and monitor the application deployment. The service should allow for the dynamic configuration of the deployment, so that a variety of services can be deployed based on the needs of the user. It should also be resilient to failures that occur during the provisioning process and allow for the dynamic addition and removal of nodes.

In this paper we describe and evaluate a system called Wrangler [10] that implements this functionality. Wrangler allows users to send a simple XML description of the desired deployment to a web service that manages the provisioning of virtual machines and the installation and configuration of software and services. It is capable of interfacing with many different resource providers in order to deploy applications across clouds, supports plugins that enable users to define custom behaviors for their application, and allows dependencies to be specified between nodes. Complex deployments can be created by composing several plugins that set up services, install and configure application software, download data, and monitor services, on several interdependent nodes.

The remainder of this paper is organized as follows. In the next section we describe the requirements for a cloud deployment service. In Section III we explain the design and operation of Wrangler. In Section IV we present an evaluation of the time required to deploy basic applications on several different cloud systems. Section V presents two real applications that were deployed in the cloud using Wrangler. Sections VI and VII describe related work and conclude the paper.

II. SYSTEM REQUIREMENTS

Distributed applications used in science and engineering research often require resources for short periods in order to complete a complex simulation, analyze a large dataset, or complete an experiment. This makes them ideal candidates for infrastructure clouds, which support on-demand provisioning of resources. Such applications often require complex environments in which to run. Setting up these environments involves many steps that must be repeated each time the application is deployed. In order to minimize errors and save time, it is important that these steps are automated. Based on our experience running science applications in the cloud [11,12], and our experience using the Context Broker from the Nimbus cloud management system [15], we have developed the following requirements for a deployment service:

• Automatic deployment of distributed applications. Distributed systems often consist of many services deployed across a collection of hosts. These services include batch schedulers, file systems, databases, web servers, caches, and others. A deployment service should enable a user to describe the nodes and services they require, and then automatically provision and configure the application on-demand. This process should be simple and repeatable.

• Complex dependencies. Often, the services in a distributed application depend on one another for configuration values, such as IP addresses, host names, and port numbers. In order to deploy such an application, the nodes and services must be configured in the correct order according to their dependencies, which can be expressed as a directed acyclic graph. Some previous systems for constructing virtual clusters have assumed a fixed architecture consisting of a head node and a collection of worker nodes [17,20,23,31]. This severely limits the type of applications that can be deployed. A virtual cluster provisioning system should support complex deployments, and enable nodes to advertise values that can be queried to configure dependent nodes.

• Dynamic provisioning. The resource requirements of distributed applications often change over time. For example, an e-commerce application may require more web servers during daylight hours, but fewer web servers at night. Similarly, a science application may require many worker nodes during the initial stages of a computation, but only a few nodes during the later stages. A deployment service should support dynamic provisioning by enabling the user to add and remove nodes from a deployment at runtime. This should be possible as long as the deployment's dependencies remain valid when a node is added or removed. This capability could be used along with elastic provisioning algorithms (e.g. [19]) to easily adapt deployments to the needs of an application at runtime.

• Multiple cloud providers. In the event that a single cloud provider is not able to supply sufficient resources for an application, or reliability concerns demand that an application is deployed across independent data centers, it may become necessary to provision resources from several cloud providers at the same time. This capability is known as federated cloud computing or sky computing [16]. A deployment service should support multiple resource providers with different provisioning interfaces, and should allow a single application to be deployed across clouds.

• Monitoring. Long-running services may encounter problems that require user intervention. In order to detect these issues, it is important to continuously monitor the state of a deployment in order to check for problems. A deployment service should make it easy for users to specify tests that can be used to verify that a node is functioning properly. The service should automatically run these tests and notify the user when errors occur.

In addition to these functional requirements, the system should exhibit other characteristics important to distributed systems, such as scalability, reliability, and usability.

III. ARCHITECTURE AND IMPLEMENTATION

We have developed a system called Wrangler to support the requirements outlined above. The components of the system are shown in Figure 1. They include: clients, the coordinator, and agents.

• Clients run on each user's machine and send requests to the coordinator to launch, query, and terminate deployments. Clients have the option of using a command-line tool, a Python API, or XML-RPC to interact with the coordinator.

• The coordinator is a web service that manages application deployments. It accepts requests from clients, provisions nodes from cloud providers, collects information about the state of a deployment, and acts as an information broker to aid application configuration. The coordinator stores information about its deployments in an SQLite database.

• Agents run on each of the provisioned nodes to manage their configuration and monitor their health. The agent is responsible for collecting information about the node (such as its IP addresses and hostnames), configuring the node with the software and services specified by the user, and monitoring the node for failures.

• Plugins are user-defined scripts that implement the behavior of a node. They are invoked by the agent to configure and monitor a node. Each node in a deployment can be configured with multiple plugins.

Figure 1: System architecture

A. Specifying Deployments

Users specify their deployment using a simple XML format. Each XML request document describes a deployment consisting of several nodes, which correspond to virtual machines. Each node has a provider that specifies the cloud resource provider to use for the node, and defines the characteristics of the virtual machine to be provisioned—including the VM image to use and the hardware resource type—as well as the authentication credentials required by the provider. Each node has one or more plugins, which define the behaviors, services and functionality that should be implemented by the node. Plugins can have multiple parameters, which enable the user to configure the plugin, and are passed to the script when it is executed on the node. Nodes may be members of a named group, and each node may depend on zero or more other nodes or groups.

An example deployment is shown in Figure 2. The example describes a cluster of 4 nodes: 1 NFS server node, and 3 NFS client nodes. All nodes are to be provisioned from Amazon EC2, and different images and instance types are specified for the server and the clients. The server is configured with an "nfs_server.sh" plugin, which starts the required NFS services and exports the /mnt directory. The clients, which are identical, are specified as a single node with a "count" of three. They are configured with an "nfs_client.sh" plugin, which starts NFS services and mounts the server's /mnt directory as /nfs/data. The "SERVER" parameter of the "nfs_client.sh" plugin contains a <ref> tag. This parameter is replaced with the IP address of the server node at runtime and used by the clients to mount the NFS file system. The clients are part of a "clients" group, and depend on the server node, which ensures that the NFS file system exported by the server will be available for the clients to mount when they are configured.

<deployment>
  <node name="server">
    <provider name="amazon">
      <image>ami-912837</image>
      <instance-type>c1.xlarge</instance-type>
      ...
    </provider>
    <plugin script="nfs_server.sh">
      <param name="EXPORT">/mnt</param>
    </plugin>
  </node>
  <node name="client" count="3" group="clients">
    <provider name="amazon">
      <image>ami-901873</image>
      <instance-type>m1.small</instance-type>
      ...
    </provider>
    <plugin script="nfs_client.sh">
      <param name="SERVER">
        <ref node="server" attribute="local-ipv4"/>
      </param>
      <param name="PATH">/mnt</param>
      <param name="MOUNT">/nfs/data</param>
    </plugin>
    <depends node="server"/>
  </node>
</deployment>

Figure 2: Example request for a 4 node virtual cluster with a shared NFS file system

B. Deployment Process

Here we describe the process that Wrangler goes through to deploy an application, from the initial request, to termination.

Request. The client sends a request to the coordinator that includes the XML descriptions of all the nodes to be launched, as well as any plugins used. The request can create a new deployment, or add nodes to an existing deployment.

Provisioning. Upon receiving a request from a client, the coordinator first validates the request to ensure that there are no errors. It checks that the request is valid, that all parameters and dependencies can be resolved, and that no dependency cycles exist. Then it contacts the resource providers specified in the request and provisions the appropriate type and quantity of virtual machines. In the event that network timeouts and other transient errors occur during provisioning, the coordinator automatically retries the request.

The coordinator is designed to support many different cloud providers. It currently supports Amazon EC2 [1], Eucalyptus [24], and OpenNebula [25]. Adding additional providers is designed to be relatively simple. The only functionalities that a cloud interface must provide are the ability to launch and terminate VMs, and the ability to pass custom contextualization data to a VM.
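The contents of the "nfs_server.sh" plugin referenced in Figure 2 are not shown in this paper. A minimal sketch of what such a plugin might look like is given below; it assumes the parameter and command conventions described later in Section III.C (the EXPORT parameter from Figure 2 arrives as an environment variable, and the lifecycle command is passed as the first argument), and the NFS service commands are only illustrative.

#!/bin/bash -e
# Illustrative sketch of an nfs_server.sh plugin (not the actual script).
# The agent passes the EXPORT parameter from Figure 2 as an environment
# variable, and the lifecycle command (start, stop, or status) as $1.
EXPORT=${EXPORT:-/mnt}

if [ "$1" == "start" ]; then
    # Start NFS services and export the directory to the client nodes.
    /etc/init.d/nfs start
    exportfs -o rw,no_root_squash "*:$EXPORT"
elif [ "$1" == "stop" ]; then
    # Withdraw the export before the node is terminated.
    exportfs -u "*:$EXPORT"
    /etc/init.d/nfs stop
elif [ "$1" == "status" ]; then
    # Exit non-zero if the export is no longer active, so that the
    # coordinator marks the node as failed.
    exportfs | grep -q "$EXPORT"
fi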

Startup and Registration. When the VM boots up, it starts the agent process. This requires the agent software to be pre-installed in the VM image. The advantage of this approach is that it offloads the majority of the configuration and monitoring tasks from the coordinator to the agent, which enables the coordinator to manage a larger set of nodes. The disadvantage is that it requires users to re-bundle images to include the agent software, which is not a simple task for many users and makes it more difficult to use off-the-shelf images. In the future we plan to investigate ways to install the agent at runtime to avoid this issue.

When the agent starts, it uses a provider-specific adapter to retrieve the contextualization data passed by the coordinator, and to collect attributes about the node and its environment. The contextualization data includes: the host and port where the coordinator can be contacted, the ID assigned to the node by the coordinator, and security credentials. The attributes collected include the public and private hostnames and IP addresses of the node, as well as any other relevant information available from the metadata service, such as the availability zone. Once the agent has retrieved this information, it is sent to the coordinator as part of a registration message, the node's status is set to 'registered', and the node is ready to be configured.

Configuration. When the coordinator receives a registration message from a node, it checks to see if the node has any dependencies. If the node's dependencies have not all been configured, then the coordinator waits until they have been configured before proceeding. Once all of the node's dependencies have been configured, the coordinator sends a request to the agent to configure the node.

After the agent receives a command from the coordinator to configure the node, it contacts the coordinator to retrieve the list of plugins for the node, resolving any <ref> parameters that may be present. For each plugin, the agent downloads and invokes the associated plugin script with the start command. If a plugin fails with a non-zero exit code, then the agent aborts the configuration process and reports the failure to the coordinator. If all plugins were successfully started, then the agent reports the node's status as 'configured' to the coordinator.

Upon receiving a message that the node has been configured, the coordinator checks to see if there are any nodes that depend on the newly configured node. If there are, then the coordinator attempts to configure them as well. The configuration process is complete when all agents report to the coordinator that they are configured.

Monitoring. After a node has been configured, the agent periodically monitors the node by invoking all the node's plugins with the status command. If any of the plugins report errors, the error messages are sent to the coordinator and the node's status is set to 'failed', at which point the user must intervene to correct the problem.

Termination. When the user is ready to terminate one or more nodes, they send a request to the coordinator. The request can specify a single node, or an entire deployment. Upon receiving this request, the coordinator sends messages to the agents on all nodes to be terminated, and the agents send stop commands to all of their plugins. Once the plugins are stopped, the agents report their status to the coordinator, and the coordinator contacts the cloud provider to terminate the node(s).

C. Plugins

Plugins are the modular components of a deployment. They are user-defined scripts that implement the application-specific behaviors required of a node. There are many different types of plugins that can be created, such as service plugins that start daemon processes, application plugins that install software used by the application, configuration plugins that apply application-specific settings, data plugins that download and install application data, and monitoring plugins that validate the state of the node. Several plugins can be combined to define the behavior of a node, and well-designed plugins can be reused for many different applications. For example, NFS server and NFS client plugins can be combined with plugins for different batch schedulers, such as Condor [18], PBS [26], or Sun Grid Engine [8], to deploy many different types of compute clusters.

Plugins are transferred from the client (or potentially a repository) to the coordinator when a node is provisioned, and from the coordinator to the agent when a node is configured. This enables users to easily define, modify, and reuse custom plugins. We envision that there could be a repository for the most useful plugins.

Plugins are implemented as simple scripts that run on the nodes to perform all of the actions required by the application. They are typically shell, Python, Perl, or Ruby scripts, but can be any executable program that conforms to the required interface. This interface defines the interactions between the agent and the plugin, and involves two components: parameters and commands. Parameters are the configuration variables that can be used to customize the behavior of the plugin. They are specified in the XML request document described above, and the agent passes them to the plugin as environment variables when the plugin is invoked. Commands are specific actions that must be performed by the plugin to implement the plugin lifecycle. The agent passes commands to the plugin as arguments. There are three commands that tell the plugin what to do: start, stop, and status.

• The start command tells the plugin to perform the behavior requested by the user. It is invoked when the node is being configured. All plugins should implement this command.

• The stop command tells the plugin to stop any running services and clean up. This command is invoked before the node is terminated. Only plugins that must be shut down gracefully need to implement this command.

• The status command tells the plugin to check the state of the node for errors. This command can be used, for example, to verify that a service started by the plugin is running. Only plugins that need to monitor the state of the node or long-running services need to implement this command.

The plugin can advertise node attributes by writing key=value pairs to a file specified by the agent in an environment variable. These attributes are merged with the node's existing attributes and can be queried by other nodes in the virtual cluster using <ref> tags or a command-line tool. The status command can be used to periodically update the attributes advertised by the node, and to query and respond to attributes updated by other nodes. For example, an NFS server node can advertise the address and path of an exported file system that NFS client nodes can use to mount the file system.

If at any time a plugin exits with a non-zero exit code, then the node's status is set to 'failed'. Upon failure, the output of the plugin is collected and sent to the coordinator to simplify debugging and error diagnosis.
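As an illustration of this interface, a minimal sketch of the "nfs_client.sh" plugin from Figure 2 might look like the following. The actual script is not given in this paper; the SERVER, PATH, and MOUNT parameter names come from Figure 2, and absolute paths are used for the system tools because a parameter named PATH would shadow the shell's search path.

#!/bin/bash -e
# Illustrative sketch of an nfs_client.sh plugin (not the actual script).
# SERVER, PATH, and MOUNT arrive as environment variables; the lifecycle
# command is passed as $1. Absolute paths are used because the PATH
# parameter from Figure 2 overrides the shell's search path.
if [ "$1" == "start" ]; then
    # Mount the file system exported by the server node. SERVER holds the
    # server's IP address, resolved from the <ref> tag at runtime.
    /bin/mkdir -p "$MOUNT"
    /bin/mount -t nfs "$SERVER:$PATH" "$MOUNT"
elif [ "$1" == "stop" ]; then
    # Unmount the shared file system before the node is terminated.
    /bin/umount "$MOUNT"
elif [ "$1" == "status" ]; then
    # Exit non-zero if the file system is no longer mounted.
    /bin/grep -qs " $MOUNT " /proc/mounts
fi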

A basic plugin for Condor worker nodes is shown in Figure 3. This plugin generates a configuration file and starts the condor_master process when it receives the start command, kills the condor_master process when it receives the stop command, and checks to make sure that the condor_master process is running when it receives the status command.

#!/bin/bash -e
PIDFILE=/var/run/condor/master.pid
SBIN=/usr/local/condor/sbin
if [ "$1" == "start" ]; then
    if [ "$CONDOR_HOST" == "" ]; then
        echo "CONDOR_HOST not specified"
        exit 1
    fi
    cat > /etc/condor/condor_config.local <<END
CONDOR_HOST = $CONDOR_HOST
END
    $SBIN/condor_master -pidfile $PIDFILE
elif [ "$1" == "stop" ]; then
    kill -QUIT $(cat $PIDFILE)
elif [ "$1" == "status" ]; then
    kill -0 $(cat $PIDFILE)
fi

Figure 3: Example plugin used for Condor workers.

D. Dependencies and Groups

Dependencies ensure that nodes are configured in the correct order so that services and attributes published by one node can be used by another node. When a dependency exists between two nodes, the dependent node will not be configured until the other node has been configured. Dependencies are valid as long as they do not form a cycle that would prevent the application from being deployed.

Several nodes can be configured using named groups. Nodes that depend on a group are not configured until all of the nodes in the group have been configured. Groups are used for two purposes. First, a node can depend on several nodes at once by specifying that it depends on the group. This is simpler than specifying dependencies between the node and each member of the group. Second, groups that depend on themselves form co-dependent groups. Nodes in a co-dependent group are not configured until all members of the group have registered. This ensures that the basic attributes of the nodes that are collected during registration, such as IP addresses, are available to all group members during configuration, and breaks the deadlock that would otherwise occur with a cyclic dependency. Co-dependent groups enable a limited form of cyclic dependency and are useful for deploying some peer-to-peer systems, such as parallel file systems and distributed caches, that require each node implementing the service to be aware of all the others. These types of groups are also useful for services such as Memcached clusters, where the clients need to know the addresses of each of the Memcached nodes, and for applications that deploy sets of nodes to perform a collective service.

E. Security

Wrangler uses SSL for secure communications between all components of the system. Authentication of clients is accomplished using a username and password. Authentication of agents is done using a random key that is generated by the coordinator for each node. This authentication mechanism assumes that the cloud provider's provisioning service provides the capability to securely transmit the agent's key to each VM during provisioning.

The system does not assume anything about the network connectivity between nodes, so that an application can be deployed across many clouds. The only requirement is that the coordinator can communicate with agents and vice versa.

IV. EVALUATION

The performance of Wrangler is primarily a function of the time it takes for the underlying cloud management system to start the VMs. Wrangler adds to this a relatively small amount of time for nodes to register and be configured in the correct order. With that in mind, we conducted a few basic experiments to determine the overhead of deploying applications using Wrangler.

We conducted experiments on three separate clouds: Amazon EC2, NERSC's Magellan cloud [22], and FutureGrid's Sierra cloud [7]. EC2 uses a proprietary cloud management system, while Magellan and Sierra both use the Eucalyptus cloud management system [24]. We used identical CentOS 5.5 VM images, and the m1.large instance type, on all three clouds.

A. Deployment with no plugins

The first experiment we performed was provisioning a simple vanilla cluster with no plugins. This experiment measures the time required to provision N nodes from a single provider, and for all nodes to register with the coordinator.

The results of this experiment are shown in Table I. In most cases we observe that the provisioning time for a virtual cluster is comparable to the time required to provision one VM, which we measured to be roughly 55 seconds on EC2, roughly 100 seconds on Magellan, and over 400 seconds on Sierra. For larger clusters we observe that the provisioning time is up to twice the maximum observed for one VM. This is a result of two factors. First, the nodes for each cluster were provisioned in serial, which added 1-2 seconds onto the total provisioning time for each node. In the future we plan to investigate ways to provision VMs in parallel to reduce this overhead. Second, on Magellan and Sierra there were several outlier VMs that took much longer than expected to start, possibly due to the increased load on the provider's network and services caused by the larger number of simultaneous requests. Note that we were not able to collect data for Sierra with 16 nodes because the failure rate on Sierra while running these experiments was about 8%, which virtually guaranteed that at least 1 out of every 16 VMs failed.

Table I: Mean provisioning time for a simple deployment with no plugins (rows: Amazon EC2, Magellan, Sierra; columns: 2, 4, 8, and 16 nodes).

B. Deployment for workflow applications

In the next experiment we again launch a deployment using Wrangler, but this time we add plugins for the Pegasus workflow management system [6], DAGMan [5], Condor [18], and NFS to create an environment that is similar to what we have used for executing real workflow applications in the cloud [12]. The deployment consists of a master node that manages the workflow and stores data, and N worker nodes that execute workflow tasks, as shown in Figure 5. This deployment sets up a Condor pool with a shared file system and installs application binaries on each worker node.

Figure 5: Deployment used for workflow applications.

The results of this experiment are shown in Table II. Comparing Table I and Table II, we can see that it takes on the order of 1-2 minutes for Wrangler to run all the plugins once the nodes have registered. The majority of this time is spent downloading and installing software, and waiting for all the NFS clients to successfully mount the shared file system.

Table II: Provisioning time for a deployment used for workflow applications (rows: Amazon EC2, Magellan, Sierra; columns: 2, 4, 8, and 16 nodes).

V. EXAMPLE APPLICATIONS

In this section we describe our experience using Wrangler to deploy scientific workflow applications. Although these applications are scientific workflows, other applications, such as web applications, peer to peer systems, and distributed databases, could be deployed as easily.

A. Data Storage Study

Many workflow applications require shared storage systems in order to communicate data products among nodes in a compute cluster. Recently we conducted a study [12] that evaluated several different storage configurations that can be used to share data for workflows on Amazon EC2. This study required us to deploy workflows using four parallel storage systems (Amazon S3, NFS, GlusterFS, and PVFS) in six different configurations, using three different applications and four cluster sizes—a total of 72 different combinations. Due to the large number of experiments required, and the complexity of the configurations, it was not possible to deploy the environments manually. Using Wrangler we were able to create automatic, repeatable deployments by composing plugins in different combinations to complete the study.

The deployments used in the study were similar to the one shown in Figure 4. The deployment consists of three tiers: a master node using a Condor Master plugin, worker nodes with Condor Worker, file system client, and application-specific plugins, and N file system nodes with a file system peer plugin. The file system nodes form a group so that worker nodes will be configured after the file system is ready. The time required to set up these deployments varied depending on the target cloud and the number of nodes. This example illustrates how Wrangler can be used to set up experiments for distributed systems research.

Figure 4: Deployment used in the data storage study.

B. Periodograms

Kepler [21] is a NASA satellite that uses high-precision photometry to detect planets outside our solar system. The Kepler mission periodically releases time-series datasets of star brightness called light curves. Analyzing these light curves to find new planets requires the calculation of periodograms, which identify the periodic dimming caused by a planet as it orbits its star. Generating periodograms for the hundreds of thousands of light curves that have been released by the Kepler mission is a computationally intensive job that demands high-throughput distributed computing. In order to manage these computations we developed a workflow using the Pegasus workflow management system [6].

We deployed this application across the Amazon EC2, FutureGrid Sierra, and NERSC Magellan clouds using Wrangler. The deployment configuration is illustrated in Figure 6. In this deployment, a master node running outside the cloud manages the workflow, and worker nodes running in the three cloud sites execute workflow tasks. The deployment used several different plugins to set up and configure the worker nodes, including a Condor Worker plugin to deploy and configure Condor, a Periodograms plugin to install application binaries, as well as other plugins that install software used by the application. This application successfully demonstrated Wrangler's ability to deploy complex applications across multiple cloud providers.

Figure 6: Deployment used to execute periodograms workflows.

VI. RELATED WORK

Our system is similar to the Nimbus Context Broker (NCB) [14] used with the Nimbus cloud computing system [15]. NCB supports roles, which are similar to Wrangler plugins, with the exception that NCB roles must be installed in the VM image and cannot be defined by the user when the application is deployed. In addition, our system is designed to support multiple cloud providers, while NCB works best with Nimbus-based clouds.

Recently, other groups are recognizing the need for deployment services, and are developing similar solutions. One example is cloudinit.d [2], which enables users to deploy and monitor interdependent services in the cloud. Cloudinit.d services are similar to Wrangler plugins, but each node in a cloudinit.d deployment can have only one service, while Wrangler enables users to compose several, modular plugins to define the behavior of a node. There is still much work to be done in investigating the best way to manage cloud environments.

Configuring compute clusters is a well-known systems administration problem. In the past many cluster management systems have been developed to enable system administrators to easily install and maintain high-performance computing clusters [3,9,29]. Rocks [28] is perhaps the most well known example. These systems assume that the cluster is deployed on physical machines that are owned and controlled by the user, and do not support virtual machines provisioned from cloud providers.

Constructing clusters on top of virtual machines has been explored by several previous research efforts. These include VMPlants [17], StarCluster [31], and others [20,32,34]. These systems typically assume a fixed architecture that consists of a head node and N worker nodes. They also typically support only a single type of cluster software, such as SGE, Condor, or Globus. In contrast, our approach supports complex application architectures consisting of many interdependent nodes and custom, user-defined plugins.

This work is related to virtual appliances [30] in that we are interested in deploying application services in the cloud. The focus of our project is on deploying collections of appliances for distributed applications. As such, our research can be seen as complementary to that of the virtual appliances community.

Configuration management deals with the problem of maintaining a known, consistent state across many hosts in a distributed environment. Many different configuration management and policy engines have been developed for UNIX systems. Puppet [13], Cfengine [4], and Chef [27] are a few well-known examples. Our approach is similar to these systems in that configuration is one of its primary concerns; however, provisioning, the other concern of this work, is not addressed by configuration management systems. Our approach can be seen as complementary to these systems in the sense that one could easily create a Wrangler plugin that installs a configuration management system on the nodes in a deployment, and allow that system to manage node configuration.

VII. CONCLUSION

The rapidly-developing field of cloud computing offers new opportunities for distributed applications. The unique features of cloud computing, such as on-demand provisioning, virtualization, and elasticity, as well as the emergence of commercial cloud providers, are changing the way we think about deploying and executing distributed applications. Existing infrastructure clouds support the deployment of isolated virtual machines, but do not provide functionality to deploy and configure software, monitor running VMs, or detect and respond to failures. In order to take advantage of cloud resources, new provisioning tools need to be developed to assist users with these tasks.

In this paper we presented the design and implementation of a system used for automatically deploying distributed applications on infrastructure clouds. The system interfaces with several different cloud resource providers to provision virtual machines, coordinates the configuration and initiation of services to support distributed applications, and monitors applications over time.

We have been using Wrangler since May 2010 to provision virtual clusters for scientific workflow applications on Amazon EC2, the Magellan cloud at NERSC, the Sierra and India clouds on the FutureGrid, and the Skynet cloud at ISI. We have used these virtual clusters to run several hundred workflows for applications in astronomy, earth science, and bioinformatics.

So far we have found that Wrangler makes deploying distributed applications in the cloud easy, but we have encountered some issues in using it that we plan to address in the future. Currently, Wrangler assumes that users can respond to failures manually. In practice this has been a problem because users often leave virtual clusters running unattended for long periods. In the future we plan to investigate solutions for automatically handling failures by re-provisioning failed nodes, and by implementing mechanisms to fail gracefully or provide degraded service when re-provisioning is not possible. We also plan to develop techniques for re-configuring deployments, and for dynamically scaling deployments in response to application demand.

ACKNOWLEDGEMENTS

This work was sponsored by the National Science Foundation (NSF) under award OCI-0943725. This research makes use of resources supported in part by the NSF under grant 091812 (FutureGrid), and resources of the National Energy Research Scientific Computing Center (Magellan).

REFERENCES

[1] Amazon.com, "Elastic Compute Cloud (EC2)," http://aws.amazon.com/ec2.
[2] J. Bresnahan, D. LaBissoniere, T. Freeman, and K. Keahey, "Managing Appliance Launches in Infrastructure Clouds," TeraGrid Conference, 2011.
[3] M. Brim, T. Mattson, and S. Scott, "OSCAR: Open Source Cluster Application Resources," Ottawa Linux Symposium, 2001.
[4] M. Burgess, "A site configuration engine," USENIX Computing Systems, vol. 8, 1995.
[5] DAGMan, http://cs.wisc.edu/condor/dagman.
[6] E. Deelman, G. Singh, M.-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G.B. Berriman, J. Good, A. Laity, J.C. Jacob, and D.S. Katz, "Pegasus: A framework for mapping complex scientific workflows onto distributed systems," Scientific Programming, vol. 13, no. 3, 2005.
[7] FutureGrid, http://futuregrid.org/.
[8] W. Gentzsch, "Sun Grid Engine: towards creating a compute power grid," 1st IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid '01), 2001.
[9] Infiniscale, "Perceus/Warewulf," http://www.perceus.org/.
[10] G. Juve and E. Deelman, "Wrangler: Virtual Cluster Provisioning for the Cloud," 20th International Symposium on High Performance Distributed Computing (HPDC), 2011.
[11] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B.P. Berman, and P. Maechling, "Scientific Workflow Applications on Amazon EC2," Workshop on Cloud-based Services and Applications in conjunction with the 5th IEEE International Conference on e-Science (e-Science 2009), 2009.
[12] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B.P. Berman, and P. Maechling, "Data Sharing Options for Scientific Workflows on Amazon EC2," 2010 ACM/IEEE Conference on Supercomputing (SC 10), 2010.
[13] L. Kanies, "Puppet: Next Generation Configuration Management," Login, vol. 31, no. 1, 2006.
[14] K. Keahey and T. Freeman, "Contextualization: Providing One-Click Virtual Clusters," 4th International Conference on e-Science (e-Science 08), 2008.
[15] K. Keahey, R. Figueiredo, J. Fortes, T. Freeman, and M. Tsugawa, "Science clouds: Early experiences in cloud computing for scientific applications," Cloud Computing and Its Applications, 2008.
[16] K. Keahey, M. Tsugawa, A. Matsunaga, and J. Fortes, "Sky Computing," IEEE Internet Computing, vol. 13, no. 5, 2009.
[17] I. Krsul, A. Ganguly, J. Zhang, J.A.B. Fortes, and R.J. Figueiredo, "VMPlants: Providing and Managing Virtual Machine Execution Environments for Grid Computing," 2004 ACM/IEEE Conference on Supercomputing (SC 04), 2004.
[18] M. Litzkow, M. Livny, and M. Mutka, "Condor: A Hunter of Idle Workstations," 8th International Conference of Distributed Computing Systems, 1988.
[19] P. Marshall, K. Keahey, and T. Freeman, "Elastic Site: Using Clouds to Elastically Extend Site Resources," 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2010), 2010.
[20] M. Murphy, B. Kagey, M. Fenn, and S. Goasguen, "Dynamic Provisioning of Virtual Organization Clusters," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 09), 2009.
[21] NASA, "Kepler," http://kepler.nasa.gov.
[22] NERSC, "Magellan," http://magellan.nersc.gov/.
[23] H. Nishimura, N. Maruyama, and S. Matsuoka, "Virtual Clusters on the Fly - Fast, Scalable, and Flexible Installation," 7th IEEE International Symposium on Cluster Computing and the Grid (CCGrid 07), 2007.
[24] D. Nurmi, R. Wolski, C. Grzegorczyk, G. Obertelli, S. Soman, L. Youseff, and D. Zagorodnov, "The Eucalyptus Open-source Cloud-computing System," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 09), 2009.
[25] OpenNebula, http://www.opennebula.org.
[26] OpenPBS, http://www.openpbs.org.
[27] Opscode, "Chef," http://www.opscode.com/chef.
[28] P. Papadopoulos, M. Katz, and G. Bruno, "NPACI Rocks: tools and techniques for easily deploying manageable Linux clusters," Concurrency and Computation: Practice and Experience, vol. 15, 2003.
[29] Penguin Computing, "Scyld ClusterWare," http://www.penguincomputing.com/software/scyld_clusterware.
[30] C. Sapuntzakis, D. Brumley, R. Chandra, N. Zeldovich, J. Chow, M.S. Lam, and M. Rosenblum, "Virtual Appliances for Deploying and Maintaining Software," 17th USENIX Conference on System Administration, 2003.
[31] StarCluster, http://web.mit.edu/stardev/cluster/.
[32] P. Uthayopas, S. Paisitbenchapol, T. Angskun, and J. Maneesilp, "System management framework and tools for Beowulf cluster," Fourth International Conference/Exhibition on High Performance Computing in the Asia-Pacific Region, 2000.
[33] J.-S. Vockler, G. Juve, E. Deelman, M. Rynge, and G.B. Berriman, "Experiences Using Cloud Computing for a Scientific Workflow Application," 2nd Workshop on Scientific Cloud Computing (ScienceCloud), 2011.
[34] Z. Zhi-Hong, M. Dan, Z. Jian-Feng, W. Lei, W. Lin-ping, and H. Wei, "Easy and reliable cluster management: the self-management experience of Fire Phoenix," 20th International Parallel and Distributed Processing Symposium (IPDPS 06), 2006.