Automating Application Deployment in Infrastructure Clouds
Gideon Juve and Ewa Deelman
USC Information Sciences Institute
Marina del Rey, California, USA
{gideon,deelman}@isi.edu

Abstract—Cloud computing systems are becoming an important platform for distributed applications in science and engineering. Infrastructure as a Service (IaaS) clouds provide the capability to provision virtual machines (VMs) on demand with a specific configuration of hardware resources, but they do not provide functionality for managing resources once they are provisioned. In order for such clouds to be used effectively, tools need to be developed that can help users to deploy their applications in the cloud. In this paper we describe a system we have developed to provision, configure, and manage virtual machine deployments in the cloud. We also describe our experiences using the system to provision resources for scientific workflow applications, and identify areas for further research.

Keywords—cloud computing; provisioning; application deployment

I. INTRODUCTION

Infrastructure as a Service (IaaS) clouds are becoming an important platform for distributed applications. These clouds allow users to provision computational, storage and networking resources from commercial and academic resource providers. Unlike other distributed resource sharing solutions, such as grids, users of infrastructure clouds are given full control of the entire software environment in which their applications run. The benefits of this approach include support for legacy applications and the ability to customize the environment to suit the application. The drawbacks include increased complexity and the additional effort required to set up and deploy the application.

Current infrastructure clouds provide interfaces for allocating individual virtual machines (VMs) with a desired configuration of CPU, memory, disk space, etc. However, these interfaces typically do not provide any features to help users deploy and configure their application once resources have been provisioned. In order to make use of infrastructure clouds, developers need software tools that can be used to configure dynamic execution environments in the cloud.

The execution environments required by distributed scientific applications, such as workflows and parallel programs, typically require a distributed storage system for sharing data between application tasks running on different nodes, and a resource manager for scheduling tasks onto nodes [12]. Fortunately, many such services have been developed for use in traditional HPC environments, such as clusters and grids. The challenge is how to deploy these services in the cloud given the dynamic nature of cloud environments. Unlike clouds, clusters and grids are static environments. A system administrator can set up the required services on a cluster and, with some maintenance, the cluster will be ready to run applications at any time. Clouds, on the other hand, are highly dynamic. Virtual machines provisioned from the cloud may be used to run applications for only a few hours at a time. In order to make efficient use of such an environment, tools are needed to automatically install, configure, and run distributed services in a repeatable way.

Deploying such applications is not a trivial task. It is usually not sufficient to simply develop a virtual machine (VM) image that runs the appropriate services when the virtual machine starts up, and then just deploy the image on several VMs in the cloud. Often the configuration of distributed services requires information about the nodes in the deployment that is not available until after nodes are provisioned (such as IP addresses, host names, etc.) as well as parameters specified by the user. In addition, nodes often form a complex hierarchy of interdependent services that must be configured in the correct order. Although users can manually configure such complex deployments, doing so is time consuming and error prone, especially for deployments with a large number of nodes. Instead, we advocate an approach where the user is able to specify the layout of their application declaratively, and use a service to automatically provision, configure, and monitor the application deployment. The service should allow for the dynamic configuration of the deployment, so that a variety of services can be deployed based on the needs of the user. It should also be resilient to failures that occur during the provisioning process and allow for the dynamic addition and removal of nodes.

In this paper we describe and evaluate a system called Wrangler [10] that implements this functionality. Wrangler allows users to send a simple XML description of the desired deployment to a web service that manages the provisioning of virtual machines and the installation and configuration of software and services. It is capable of interfacing with many different resource providers in order to deploy applications across clouds, supports plugins that enable users to define custom behaviors for their application, and allows dependencies to be specified between nodes. Complex deployments can be created by composing several plugins that set up services, install and configure application software, download data, and monitor services, on several interdependent nodes.

The remainder of this paper is organized as follows. In the next section we describe the requirements for a cloud deployment service. In Section III we explain the design and operation of Wrangler. In Section IV we present an evaluation of the time required to deploy basic applications on several different cloud systems. Section V presents two real applications that were deployed in the cloud using Wrangler. Sections VI and VII describe related work and conclude the paper.

II. SYSTEM REQUIREMENTS

Distributed applications used in science and engineering research often require resources for short periods in order to complete a complex simulation, analyze a large dataset, or run an experiment. This makes them ideal candidates for infrastructure clouds, which support on-demand provisioning of resources. Such applications often require complex environments in which to run: distributed systems often consist of many services deployed across a collection of hosts, including batch schedulers, web servers, databases, caches, file systems, and others. Setting up these environments involves many steps that must be repeated each time the application is deployed. In order to minimize errors and save time, it is important that these steps are automated.

Based on our experience running science applications in the cloud [11,12,33], and our experience using the Context Broker from the Nimbus cloud management system [15], we have developed the following requirements for a deployment service:

• Automatic deployment of distributed applications. A deployment service should enable a user to describe the nodes and services they require, and then automatically provision and configure the application on-demand. This process should be simple and repeatable.

• Complex dependencies. Often, the services in a distributed application depend on one another for configuration values, such as IP addresses, host names, and port numbers. In order to deploy such an application, the nodes and services must be configured in the correct order according to their dependencies, which can be expressed as a directed acyclic graph. Some previous systems for constructing virtual clusters have assumed a fixed architecture consisting of a head node and a collection of worker nodes [17,20,23,31]. This severely limits the type of applications that can be deployed. A cluster provisioning system should support complex deployments, and enable nodes to advertise values that can be queried to configure dependent nodes.

• Dynamic provisioning. The resource requirements of distributed applications often change over time. For example, an e-commerce application may require more web servers during daylight hours, but fewer web servers at night. Similarly, a science application may require many worker nodes during the initial stages of a computation, but only a few nodes during the later stages. A deployment service should support dynamic provisioning by enabling the user to add and remove nodes from a deployment at runtime. This should be possible as long as the deployment's dependencies remain valid when a node is added or removed. This capability could be used along with elastic provisioning algorithms (e.g. [19]) to easily adapt deployments to the needs of an application at runtime.

• Multiple cloud providers. In the event that a single cloud provider is not able to supply sufficient resources for an application, it may become necessary to provision resources from several cloud providers at the same time. Similarly, reliability concerns may demand that an application is deployed across independent data centers. This capability is known as federated cloud computing, or sky computing [16]. A deployment service should support multiple resource providers with different provisioning interfaces, and should allow a single application to be deployed across multiple clouds.

• Monitoring. Long-running services may encounter problems that require user intervention. In order to detect these issues, it is important to continuously monitor the state of a deployment in order to check for problems. A deployment service should make it easy for users to specify tests that can be used to verify that a node is functioning properly, and then automatically run these tests and notify the user when errors occur.

In addition to these functional requirements, the system should exhibit other characteristics important to distributed systems, such as scalability, reliability, and usability.

III. ARCHITECTURE AND IMPLEMENTATION

We have developed a system called Wrangler to support the requirements outlined above. The components of the system are shown in Figure 1. They include: clients, a coordinator, and agents.

• Clients run on each user's machine and send requests to the coordinator to launch, query, and terminate deployments. Clients have the option of using a command-line tool, a Python API, or XML-RPC to interact with the coordinator.

• The coordinator is a web service that manages application deployments. It accepts requests from clients, provisions nodes from cloud providers, collects information about the state of a deployment, and acts as an information broker to aid application configuration. The coordinator stores information about its deployments in an SQLite database.

• Agents run on each of the provisioned nodes to manage their configuration and monitor their health. The agent is responsible for collecting information about the node (such as its IP addresses and hostnames), reporting the state of the node to the coordinator, configuring the node with the software and services specified by the user, and monitoring the node for failures.

• Plugins are user-defined scripts that implement the behavior of a node. They are invoked by the agent to configure and monitor a node. Each node in a deployment can be configured with multiple plugins.

The system does not assume anything about the network connectivity between nodes, so that an application can be deployed across many clouds. The only requirement is that the coordinator can communicate with the agents and vice versa.

Figure 1: System architecture

A. Specifying Deployments

Users specify their deployment using a simple XML format. Each XML request document describes a deployment consisting of several nodes, which correspond to virtual machines. Each node has a provider that specifies the cloud resource provider to use for the node and defines the characteristics of the virtual machine to be provisioned—including the VM image to use and the hardware resource type—as well as authentication credentials required by the provider. Each node has one or more plugins, which define the behaviors, services and functionality that should be implemented by the node. Plugins can have multiple parameters, which enable the user to configure the plugin, and are passed to the script when it is executed on the node. Nodes may be members of a named group, and each node may depend on zero or more other nodes or groups.

An example deployment is shown in Figure 2. The example describes a cluster of 4 nodes: 1 NFS server node and 3 NFS client nodes. All nodes are to be provisioned from Amazon EC2, and different images and instance types are specified for the server and the clients. The server is configured with an "nfs_server.sh" plugin, which starts the required NFS services and exports the /mnt directory. The clients are configured with an "nfs_client.sh" plugin, which starts NFS services and mounts the server's /mnt directory as /nfs/data. The "SERVER" parameter of the "nfs_client.sh" plugin contains a <ref> tag. This parameter is replaced with the IP address of the server node at runtime and used by the clients to mount the NFS file system. The clients, which are identical, are specified as a single node with a "count" of three. The clients are part of a "clients" group and depend on the server node, which ensures that the NFS file system exported by the server will be available for the clients to mount when they are configured.

<deployment>
  <node name="server">
    <provider name="amazon">
      <image>ami-912837</image>
      <instance-type>c1.xlarge</instance-type>
      ...
    </provider>
    <plugin script="nfs_server.sh">
      <param name="EXPORT">/mnt</param>
    </plugin>
  </node>
  <node name="client" count="3" group="clients">
    <provider name="amazon">
      <image>ami-901873</image>
      <instance-type>m1.small</instance-type>
      ...
    </provider>
    <plugin script="nfs_client.sh">
      <param name="SERVER">
        <ref node="server" attribute="local-ipv4"/>
      </param>
      <param name="PATH">/mnt</param>
      <param name="MOUNT">/nfs/data</param>
    </plugin>
    <depends node="server"/>
  </node>
</deployment>

Figure 2: Example request for a 4 node virtual cluster with a shared NFS file system
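Because the provider element encapsulates everything that is specific to a given cloud, the same node can in principle be retargeted at another cloud by swapping that element while leaving the plugins untouched (the coordinator's multi-provider support is described in the next subsection). The fragment below sketches what this might look like for a Eucalyptus-based cloud such as Sierra; the provider name, image identifier, and provider-specific child elements shown here are illustrative assumptions rather than Wrangler's documented schema.

<node name="server">
  <provider name="eucalyptus">
    <!-- hypothetical provider-specific settings; the real element names may differ -->
    <image>emi-0a1b2c3d</image>
    <instance-type>m1.large</instance-type>
    ...
  </provider>
  <plugin script="nfs_server.sh">
    <param name="EXPORT">/mnt</param>
  </plugin>
</node>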

B. Deployment Process

Here we describe the process that Wrangler goes through to deploy an application, from the initial request, through provisioning, configuration, and monitoring, to termination.

Request. The client sends a request to the coordinator that includes the XML descriptions of all the nodes to be launched, as well as any plugins used. The request can create a new deployment, or add nodes to an existing deployment.

Provisioning. Upon receiving a request from a client, the coordinator first validates the request to ensure that there are no errors. It checks that the request is valid, that all dependencies can be resolved, and that no dependency cycles exist. Then it contacts the resource providers specified in the request and provisions the appropriate type and quantity of virtual machines. The coordinator is designed to support many different cloud providers. It currently supports Amazon EC2 [1], Eucalyptus [24], and OpenNebula [25]. Adding additional providers is designed to be relatively simple. The only functionalities that a cloud interface must provide are the ability to launch and terminate VMs, and the ability to pass custom contextualization data to a VM. In the event that network timeouts and other transient errors occur during provisioning, the coordinator automatically retries the request.

Startup and Registration. When the VM boots up, it starts the agent process. This requires the agent software to be pre-installed in the VM image. The disadvantage of this approach is that it requires users to re-bundle images to include the agent software, which is not a simple task for many users and makes it more difficult to use off-the-shelf images. The advantage is that it offloads the majority of the configuration and monitoring tasks from the coordinator to the agent, which enables the coordinator to manage a larger set of nodes. In the future we plan to investigate ways to install the agent at runtime to avoid this issue.

When the agent starts, it uses a provider-specific adapter to retrieve the contextualization data passed by the coordinator, and to collect attributes about the node and its environment. The contextualization data includes: the ID assigned to the node by the coordinator, security credentials, and the host and port where the coordinator can be contacted. The attributes collected include the public and private hostnames and IP addresses of the node, as well as any other relevant information available from the metadata service, such as the availability zone. Once the agent has retrieved this information, it is sent to the coordinator as part of a registration message, and the node's status is set to 'registered'. At that point, the node is ready to be configured.

Configuration. Upon receiving a message that a node has registered, the coordinator checks to see if the node has any dependencies. If it does, the coordinator makes sure that they have been configured; if they have not, the coordinator waits until all dependencies have been configured before proceeding. Once all of the node's dependencies have been configured, the coordinator sends a request to the agent to configure the node. Upon receiving this request, the agent contacts the coordinator to retrieve the list of plugins for the node, resolving any <ref> parameters that may be present. For each plugin, the agent downloads and invokes the associated plugin script with the start command. If a plugin fails with a non-zero exit code, then the agent aborts the configuration process, the error messages are sent to the coordinator, and the node's status is set to 'failed'. If all plugins were successfully started, then the agent reports the node's status as 'configured' to the coordinator.

After a node has been configured, the coordinator checks to see if there are any nodes that depend on the newly configured node. If there are, then the coordinator attempts to configure them as well. The configuration process is complete when all agents report to the coordinator that they are configured.

Monitoring. After a node has been configured, the agent periodically monitors the node by invoking all of the node's plugins with the status command. If any of the plugins report errors, a message is sent to the coordinator with updated attributes for the node.

Termination. When the user is ready to terminate one or more nodes, they send a request to the coordinator. The request can specify a single node, several nodes, or an entire deployment. Upon receiving this request, the coordinator sends messages to the agents on all nodes to be terminated, and the agents send stop commands to all of their plugins. Once the plugins are stopped, the agents report their status to the coordinator, and the coordinator contacts the cloud provider to terminate the node(s).

C. Plugins

Plugins are the modular components of a deployment. They are user-defined scripts that implement the application-specific behaviors required of a node. There are many different types of plugins that can be created, such as service plugins that start daemon processes, application plugins that install software used by the application, configuration plugins that apply application-specific settings, data plugins that download and install application data, and monitoring plugins that validate the state of the node. This enables users to easily define, modify, and reuse custom plugins, and well-designed plugins can be reused for many different applications. We envision that there could be a repository for the most useful plugins. For example, NFS server and NFS client plugins can be combined with plugins for different batch schedulers, such as Condor [18], PBS [26], or Sun Grid Engine [8], to deploy many different types of compute clusters.

Plugins are implemented as simple scripts that run on the nodes to perform all of the actions required by the application. They are typically shell, Perl, Python, or Ruby scripts, but can be any executable program that conforms to the required interface. Plugins are specified in the XML request document described above, are transferred from the client (or potentially a repository) to the coordinator when a node is provisioned, and from the coordinator to the agent when a node is configured.

The plugin interface defines the interactions between the agent and the plugin and involves two components: parameters and commands. Parameters are the configuration variables that can be used to customize the behavior of the plugin; the agent passes parameters to the plugin as environment variables when the plugin is invoked. Commands are specific actions that must be performed by the plugin to implement the plugin lifecycle; the agent passes commands to the plugin as arguments. There are three commands that tell the plugin what to do: start, stop, and status.

• The start command tells the plugin to perform the behavior requested by the user. It is invoked when the node is being configured. All plugins should implement this command.

• The stop command tells the plugin to stop any running services and clean up. This command is invoked before the node is terminated. Only plugins that must be shut down gracefully need to implement this command.

• The status command tells the plugin to check the state of the node for errors. This command can be used, for example, to verify that a service started by the plugin is running. Only plugins that need to monitor the state of the node or long-running services need to implement this command.

A plugin can advertise node attributes by writing key=value pairs to a file specified by the agent in an environment variable. These attributes are merged with the node's existing attributes and can be queried by other nodes in the virtual cluster using <ref> tags or a command-line tool. For example, an NFS server node can advertise the address and path of an exported file system that NFS client nodes can use to mount the file system. The status command can be used to periodically update the attributes advertised by the node, and to query and respond to attributes updated by other nodes. If at any time a plugin exits with a non-zero exit code, then the node's status is set to failed. Upon failure, the output of the plugin is collected and sent to the coordinator to simplify debugging and error diagnosis.

A basic plugin for Condor worker nodes is shown in Figure 3. This plugin generates a configuration file and starts the condor_master process when it receives the start command, kills the condor_master process when it receives the stop command, and checks to make sure that the condor_master process is running when it receives the status command.

#!/bin/bash -e

PIDFILE=/var/run/condor/master.pid
SBIN=/usr/local/condor/sbin

if [ "$1" == "start" ]; then
    if [ "$CONDOR_HOST" == "" ]; then
        echo "CONDOR_HOST not specified"
        exit 1
    fi
    cat > /etc/condor/condor_config.local <<END
CONDOR_HOST = $CONDOR_HOST
END
    $SBIN/condor_master -pidfile $PIDFILE
elif [ "$1" == "stop" ]; then
    kill -QUIT $(cat $PIDFILE)
elif [ "$1" == "status" ]; then
    kill -0 $(cat $PIDFILE)
fi

Figure 3: Example plugin used for Condor workers
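To make the plugin contract concrete, the sketch below shows the general shape of a plugin that also publishes an attribute for other nodes to reference: parameters arrive as environment variables, the command arrives as the first argument, and key=value pairs are appended to the attributes file supplied by the agent. The variable name WRANGLER_ATTRIBUTES and the EXPORT attribute are assumptions used for illustration; only the interface behavior described above (start/stop/status commands, parameters as environment variables, non-zero exit on failure) is taken from the text.

#!/bin/bash -e
# Minimal plugin sketch: start/stop/status dispatch plus attribute advertisement.
# EXPORT is a user-supplied parameter passed by the agent as an environment variable.
# WRANGLER_ATTRIBUTES is a hypothetical name for the attributes file provided by the agent.

case "$1" in
  start)
    if [ -z "$EXPORT" ]; then
      echo "EXPORT not specified"
      exit 1
    fi
    # ... start the service that shares $EXPORT ...
    # Advertise the exported path so other nodes can query it with <ref>.
    echo "EXPORT=$EXPORT" >> "$WRANGLER_ATTRIBUTES"
    ;;
  stop)
    # ... shut the service down gracefully ...
    ;;
  status)
    # ... exit non-zero if the service is no longer healthy ...
    ;;
esac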

D. Dependencies and Groups

Dependencies ensure that nodes are configured in the correct order so that services and attributes published by one node can be used by another node. When a dependency exists between two nodes, the dependent node will not be configured until the other node has been configured. For example, an NFS server node can advertise the address and path of an exported file system that NFS client nodes use to mount the file system. Dependencies are valid as long as they do not form a cycle that would prevent the application from being deployed.

Applications that deploy sets of nodes to perform a collective service, such as parallel file systems and distributed caches, can be configured using named groups. Groups are used for two purposes. First, a node can depend on several nodes at once by specifying that it depends on the group. This is simpler than specifying dependencies between the node and each member of the group. Nodes that depend on a group are not configured until all of the nodes in the group have been configured. Second, groups that depend on themselves form co-dependent groups. Nodes in a co-dependent group are not configured until all members of the group have registered. This ensures that the basic attributes of the nodes that are collected during registration, such as IP addresses, are available to all group members during configuration, and it breaks the deadlock that would otherwise occur with a cyclic dependency. Co-dependent groups enable a limited form of cyclic dependency and are useful for deploying peer-to-peer systems and parallel file systems that require each node implementing the service to be aware of all the others. They are also useful for services such as Memcached clusters, where the clients need to know the addresses of each of the Memcached nodes.
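As an illustration of the group mechanism, the fragment below sketches a co-dependent group of cache peers together with a node that depends on the entire group. Figure 2 only demonstrates the <depends node="..."/> form, so the group-level dependency syntax, the plugin names, and the image identifier used here are assumptions for illustration only.

<node name="cachepeer" count="4" group="cache">
  <provider name="amazon">
    <image>ami-000000</image>
    <instance-type>m1.small</instance-type>
    ...
  </provider>
  <plugin script="memcached_peer.sh"/>
  <!-- assumed syntax: a group that depends on itself is co-dependent, so peers are
       configured only after all members of "cache" have registered -->
  <depends group="cache"/>
</node>
<node name="webserver">
  ...
  <!-- assumed syntax: configured only after every member of "cache" is configured -->
  <depends group="cache"/>
</node>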

E. Security

Wrangler uses SSL for secure communications between all components of the system. Authentication of clients is accomplished using a username and password. Authentication of agents is done using a random key that is generated by the coordinator for each node. This authentication mechanism assumes that the cloud provider's provisioning service provides the capability to securely transmit the agent's key to each VM during provisioning.

IV. EVALUATION

The performance of Wrangler is primarily a function of the time it takes for the underlying cloud management system to start the VMs. Wrangler adds to this a relatively small amount of time for nodes to register and be configured in the correct order. With that in mind, we conducted a few basic experiments to determine the overhead of deploying applications using Wrangler.

We conducted experiments on three separate clouds: Amazon EC2, NERSC's Magellan cloud [22], and FutureGrid's Sierra cloud [7]. EC2 uses a proprietary cloud management system, while Magellan and Sierra both use the Eucalyptus cloud management system [24]. We used identical CentOS 5.5 VM images, and the m1.large instance type, on all three clouds.

A. Deployment with no plugins

The first experiment we performed was provisioning a simple vanilla cluster with no plugins. This experiment measures the time required to provision N nodes from a single provider and for all of the nodes to register with the coordinator. The results of this experiment are shown in Table I. In most cases we observe that the provisioning time for a virtual cluster is comparable to the time required to provision one VM. For larger clusters we observe that the provisioning time is up to twice the maximum observed for one VM, possibly due to the increased load on the provider's network and services caused by the larger number of simultaneous requests. This is a result of two factors. First, on Magellan and Sierra there were several outlier VMs that took much longer than expected to start. Second, the nodes for each cluster were provisioned in serial, which added 1-2 seconds onto the total provisioning time for each node. In the future we plan to investigate ways to provision VMs in parallel to reduce this overhead.

Table I: Mean provisioning time for a simple deployment with no plugins (rows: Amazon, Magellan, Sierra; columns: 2, 4, 8, and 16 nodes).

B. Deployment for workflow applications

In the next experiment we again launch a deployment using Wrangler, but this time we add plugins for the Pegasus workflow management system [6], DAGMan [5], Condor [18], and NFS to create an environment that is similar to what we have used for executing real workflow applications in the cloud [12]. This deployment sets up a Condor pool with a shared NFS file system and installs application binaries on each worker node. The deployment consists of three tiers: a master node using a Condor Master plugin, an NFS server node, and N worker nodes with Condor Worker and NFS client plugins that execute workflow tasks, as shown in Figure 5.

Figure 5: Deployment used for workflow applications

The results of this experiment are shown in Table II. Note that we were not able to collect data for Sierra with 16 nodes because the failure rate on Sierra while running these experiments was about 8%, which virtually guaranteed that at least 1 out of every 16 VMs failed. Comparing Table I and Table II, we can see that it takes on the order of 1-2 minutes for Wrangler to run all the plugins once the nodes have registered. The majority of this time is spent downloading and installing software, and waiting for all the N NFS clients to successfully mount the shared file system.

Table II: Provisioning time for a deployment used for workflow applications (rows: Amazon, Magellan, Sierra; columns: 2, 4, 8, and 16 nodes; the 16-node configuration failed on Sierra).

V. EXAMPLE APPLICATIONS

In this section we describe our experience using Wrangler to deploy scientific workflow applications. Although these applications are scientific workflows, other applications, such as web applications and peer-to-peer systems, could be deployed as easily.

A. Data Storage Study

Many workflow applications require shared storage systems in order to communicate data products among nodes in a compute cluster. Recently we conducted a study [12] that evaluated several different storage configurations that can be used to share data for workflows on Amazon EC2. This study required us to deploy workflows using four parallel storage systems (Amazon S3, NFS, GlusterFS, and PVFS) in six different configurations using three different applications and four cluster sizes—a total of 72 different combinations. Due to the large number of experiments required, it was not possible to deploy the environments manually. Using Wrangler we were able to create automatic, repeatable deployments by composing several application-specific plugins in different combinations to complete the study.

The deployments used in the study were similar to the one shown in Figure 4. The deployment consists of a master node that manages the workflow and stores data, worker nodes with Condor Worker and file system client plugins that execute workflow tasks, and N file system nodes with a file system peer plugin. The file system nodes form a group so that worker nodes will be configured after the file system is deployed.

Figure 4: Deployment used in the data storage study
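A request for the kind of workflow environment described in Sections IV.B and V.A might be structured as sketched below: a master node, a file server node, and a group of workers that depend on both, with the worker's CONDOR_HOST parameter matching the variable expected by the plugin in Figure 3. The plugin script names and the omitted provider details are placeholders, not the exact requests used in the study.

<deployment>
  <node name="master">
    <provider name="amazon"> ... </provider>
    <plugin script="condor_master.sh"/>   <!-- hypothetical plugin name -->
  </node>
  <node name="nfs">
    <provider name="amazon"> ... </provider>
    <plugin script="nfs_server.sh">
      <param name="EXPORT">/mnt</param>
    </plugin>
  </node>
  <node name="worker" count="8" group="workers">
    <provider name="amazon"> ... </provider>
    <plugin script="condor_worker.sh">    <!-- hypothetical plugin name -->
      <param name="CONDOR_HOST">
        <ref node="master" attribute="local-ipv4"/>
      </param>
    </plugin>
    <plugin script="nfs_client.sh">
      <param name="SERVER"><ref node="nfs" attribute="local-ipv4"/></param>
      <param name="PATH">/mnt</param>
      <param name="MOUNT">/nfs/data</param>
    </plugin>
    <depends node="master"/>
    <depends node="nfs"/>
  </node>
</deployment>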

B. Periodograms

Kepler [21] is a NASA satellite that uses high-precision photometry to detect planets outside our solar system. The Kepler mission periodically releases time-series datasets of star brightness called light curves. Analyzing these light curves to find new planets requires the calculation of periodograms, which identify the periodic dimming caused by a planet as it orbits its star. Generating periodograms for the hundreds of thousands of light curves that have been released by the Kepler mission is a computationally intensive job that demands high-throughput distributed computing. In order to manage these computations we developed a workflow using the Pegasus workflow management system [6].

We deployed this application across the Amazon EC2, FutureGrid Sierra, and NERSC Magellan clouds using Wrangler. The deployment used several different plugins to set up and configure the software on the worker nodes, including a Condor Worker plugin to deploy and configure Condor, and a Periodograms plugin to install application binaries. In this deployment, a master node running outside the cloud manages the workflow, and worker nodes running in the three cloud sites execute workflow tasks. This application successfully demonstrated Wrangler's ability to deploy complex applications across multiple cloud providers.

Figure 6: Deployment used to execute periodograms

VI. RELATED WORK

Our system is similar to the Nimbus Context Broker (NCB) [14] used with the Nimbus cloud computing system [15]. NCB supports roles, which are similar to Wrangler plugins, with the exception that NCB roles must be installed in the VM image and cannot be defined by the user when the application is deployed. In addition, our system is designed to support multiple cloud providers, while NCB works best with Nimbus-based clouds.

Recently, other groups have recognized the need for deployment services and are developing similar solutions. One example is cloudinit.d [2], which enables users to deploy and monitor interdependent services in the cloud. Cloudinit.d services are similar to Wrangler plugins, but each node in cloudinit.d can have only one service, while Wrangler enables users to compose several modular plugins to define the behavior of a node.

Configuring compute clusters is a well-known systems administration problem. In the past many cluster management systems have been developed to enable system administrators to easily install and maintain high-performance computing clusters [3,9,29,32,34]. Rocks [28] is perhaps the most well-known example. These systems typically assume that the cluster is deployed on physical machines that are owned and controlled by the user, and they do not support virtual machines provisioned from cloud providers.

Constructing clusters on top of virtual machines has been explored by several previous research efforts. These include VMPlants [17], StarCluster [31], and others [20,23]. These systems typically assume a fixed architecture that consists of a head node and N worker nodes. They also typically support only a single type of cluster software, such as SGE, Condor, or Globus. In contrast, our approach supports complex application architectures consisting of many interdependent services, unique nodes, and custom, user-defined plugins.

This work is related to virtual appliances [30] in that we are interested in deploying application services in the cloud. The focus of our project is on deploying collections of appliances for distributed applications. As such, our research is complementary to that of the virtual appliances community.

Configuration management deals with the problem of maintaining a known, consistent state across many hosts in a distributed environment. Many different configuration management and policy engines have been developed for UNIX systems; Cfengine [4], Puppet [13], and Chef [27] are a few well-known examples. Our approach is similar to these systems in that configuration is one of its primary concerns; however, these systems do not provide functionality to provision and deploy virtual machines. Our approach can be seen as complementary to these systems in the sense that one could easily create a Wrangler plugin that installs a configuration management system on the nodes in a deployment and allows that system to manage node configuration. There is still much work to be done in investigating the best way to manage cloud environments.

VII. CONCLUSION

The rapidly-developing field of cloud computing offers new opportunities for distributed applications. The unique features of cloud computing, such as on-demand provisioning, virtualization, and elasticity, as well as the emergence of commercial cloud providers, are changing the way we think about deploying and executing distributed applications. Existing infrastructure clouds support the deployment of isolated virtual machines, but do not provide functionality to deploy and configure software, monitor running VMs, or detect and respond to failures. In order to take advantage of infrastructure clouds, new provisioning tools need to be developed to assist users with these tasks.

In this paper we presented the design and implementation of a system used for automatically deploying distributed applications on infrastructure clouds. The system interfaces with several different cloud resource providers to provision virtual machines, coordinates the configuration and initiation of services to support distributed applications, and monitors applications over time.

We have been using Wrangler since May 2010 to provision virtual clusters for scientific workflow applications on Amazon EC2, the Magellan cloud at NERSC, the Sierra and India clouds on the FutureGrid, and the Skynet cloud at ISI. We have used these virtual clusters to run several hundred workflows for applications in astronomy, earth science, and bioinformatics.

So far we have found that Wrangler makes deploying distributed applications in the cloud easy, but we have encountered some issues in using it that we plan to address in the future. Currently, Wrangler assumes that users can respond to failures manually. In practice this has been a problem because users often leave virtual clusters running unattended for long periods. In the future we plan to investigate solutions for automatically handling failures by re-provisioning failed nodes, and by implementing mechanisms for applications to fail gracefully or provide degraded service when re-provisioning is not possible. We also plan to develop techniques for re-configuring deployments, and for dynamically scaling deployments in response to application demand.

ACKNOWLEDGEMENTS

This work was sponsored by the National Science Foundation (NSF) under award OCI-0943725. This research makes use of resources supported in part by the NSF under grant 091812 (FutureGrid), and resources of the National Energy Research Scientific Computing Center (Magellan).

REFERENCES

[1] Amazon.com, "Elastic Compute Cloud (EC2)," http://aws.amazon.com/ec2.
[2] J. Bresnahan, T. Freeman, D. LaBissoniere, and K. Keahey, "Managing Appliance Launches in Infrastructure Clouds," TeraGrid Conference, 2011.
[3] M. Brim, T. Mattson, and S. Scott, "OSCAR: Open Source Cluster Application Resources," Ottawa Linux Symposium, 2001.
[4] M. Burgess, "A site configuration engine," USENIX Computing Systems, vol. 8, 1995.
[5] DAGMan, http://cs.wisc.edu/condor/dagman.
[6] E. Deelman, G. Singh, M.-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G.B. Berriman, J. Good, A. Laity, J.C. Jacob, and D.S. Katz, "Pegasus: A framework for mapping complex scientific workflows onto distributed systems," Scientific Programming, vol. 13, no. 3, pp. 219-237, 2005.
[7] FutureGrid, http://futuregrid.org/.
[8] W. Gentzsch, "Sun Grid Engine: towards creating a compute power grid," 1st IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid '01), 2001.
[9] Infiniscale, "Perceus/Warewulf," http://www.perceus.org.
[10] G. Juve and E. Deelman, "Wrangler: Virtual Cluster Provisioning for the Cloud," 20th International Symposium on High Performance Distributed Computing (HPDC), 2011.
[11] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B.P. Berman, and P. Maechling, "Scientific Workflow Applications on Amazon EC2," Workshop on Cloud-based Services and Applications, in conjunction with the 5th IEEE International Conference on e-Science (e-Science 2009), 2009.
[12] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B.P. Berman, and P. Maechling, "Data Sharing Options for Scientific Workflows on Amazon EC2," 2010 ACM/IEEE Conference on Supercomputing (SC 10), 2010.
[13] L. Kanies, "Puppet: Next Generation Configuration Management," Login, vol. 31, no. 1, Feb. 2006.
[14] K. Keahey and T. Freeman, "Contextualization: Providing One-Click Virtual Clusters," 4th International Conference on e-Science (e-Science 08), 2008.
[15] K. Keahey, R. Figueiredo, J. Fortes, T. Freeman, and M. Tsugawa, "Science clouds: Early experiences in cloud computing for scientific applications," Cloud Computing and Its Applications, 2008.
[16] K. Keahey, M. Tsugawa, A. Matsunaga, and J. Fortes, "Sky Computing," IEEE Internet Computing, vol. 13, no. 5, pp. 43-51, 2009.
[17] I. Krsul, A. Ganguly, J. Zhang, J. Fortes, and R. Figueiredo, "VMPlants: Providing and Managing Virtual Machine Execution Environments for Grid Computing," 2004 ACM/IEEE Conference on Supercomputing (SC 04), 2004.
[18] M. Litzkow, M. Livny, and M. Mutka, "Condor: A Hunter of Idle Workstations," 8th International Conference of Distributed Computing Systems, 1988.
[19] P. Marshall, K. Keahey, and T. Freeman, "Elastic Site: Using Clouds to Elastically Extend Site Resources," 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2010), 2010.
[20] M. Murphy, B. Kagey, M. Fenn, and S. Goasguen, "Dynamic Provisioning of Virtual Organization Clusters," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 09), 2009.
[21] NASA, "Kepler," http://kepler.nasa.gov.
[22] NERSC, "Magellan," http://magellan.nersc.gov.
[23] H. Nishimura, N. Maruyama, and S. Matsuoka, "Virtual Clusters on the Fly - Fast, Scalable, and Flexible Installation," 7th IEEE International Symposium on Cluster Computing and the Grid (CCGrid 07), 2007.
[24] D. Nurmi, R. Wolski, C. Grzegorczyk, G. Obertelli, S. Soman, L. Youseff, and D. Zagorodnov, "The Eucalyptus Open-source Cloud-computing System," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 09), 2009.
[25] OpenNebula, http://www.opennebula.org.
[26] OpenPBS, http://www.openpbs.org.
[27] Opscode, "Chef," http://www.opscode.com/chef.
[28] P. Papadopoulos, M. Katz, and G. Bruno, "NPACI Rocks: tools and techniques for easily deploying manageable Linux clusters," Concurrency and Computation: Practice and Experience, vol. 15, no. 7-8, pp. 707-725, 2003.
[29] Penguin Computing, "Scyld ClusterWare," http://www.penguincomputing.com/software/scyld_clusterware.
[30] C. Sapuntzakis, D. Brumley, R. Chandra, N. Zeldovich, J. Chow, M.S. Lam, and M. Rosenblum, "Virtual Appliances for Deploying and Maintaining Software," 17th USENIX Conference on System Administration, 2003.
[31] StarCluster, http://web.mit.edu/stardev/cluster/.
[32] P. Uthayopas, S. Paisitbenchapol, T. Angskun, and J. Maneesilp, "System management framework and tools for Beowulf cluster," Fourth International Conference/Exhibition on High Performance Computing in the Asia-Pacific Region, 2000.
[33] J.-S. Vöckler, G. Juve, E. Deelman, M. Rynge, and G.B. Berriman, "Experiences Using Cloud Computing for A Scientific Workflow Application," 2nd Workshop on Scientific Cloud Computing (ScienceCloud), 2011.
[34] Z. Zhi-Hong, M. Dan, Z. Jian-Feng, W. Lei, W. Lin-ping, and H. Wei, "Easy and reliable cluster management: the self-management experience of Fire Phoenix," 20th International Parallel and Distributed Processing Symposium (IPDPS 06), 2006.