Automating Application Deployment in Infrastructure Clouds
Gideon Juve and Ewa Deelman
USC Information Sciences Institute
Marina del Rey, California, USA
{gideon,deelman}@isi.edu

Abstract—Cloud computing systems are becoming an important platform for distributed applications in science and engineering. Infrastructure as a Service (IaaS) clouds provide the capability to provision virtual machines (VMs) on demand with a specific configuration of hardware resources, but they do not provide functionality for managing resources once they are provisioned. In order for such clouds to be used effectively, tools need to be developed that can help users to deploy their applications in the cloud. In this paper we describe a system we have developed to provision, configure, and manage virtual machine deployments in the cloud. We also describe our experiences using the system to provision resources for scientific workflow applications, and identify areas for further research.

Keywords—cloud computing; provisioning; application deployment

I. INTRODUCTION

Infrastructure as a Service (IaaS) clouds are becoming an important platform for distributed applications. These clouds allow users to provision computational, storage and networking resources from commercial and academic resource providers. Unlike other distributed resource sharing solutions, such as grids, users of infrastructure clouds are given full control of the entire software environment in which their applications run. The benefits of this approach include support for legacy applications and the ability to customize the environment to suit the application. The drawbacks include increased complexity and the additional effort required to set up and deploy the application.

Current infrastructure clouds provide interfaces for allocating individual virtual machines (VMs) with a desired configuration of CPU, memory, disk space, etc. However, these interfaces typically do not provide any features to help users deploy and configure their application once resources have been provisioned. In order to make use of infrastructure clouds, developers need software tools that can be used to configure dynamic execution environments in the cloud.

The execution environments required by distributed scientific applications, such as workflows and parallel programs, typically require a distributed storage system for sharing data between application tasks running on different nodes, and a resource manager for scheduling tasks onto nodes [12]. Fortunately, many such services have been developed for use in traditional HPC environments, such as clusters and grids. The challenge is how to deploy these services in the cloud given the dynamic nature of cloud environments. Unlike clouds, clusters and grids are static environments. A system administrator can set up the required services on a cluster and, with some maintenance, the cluster will be ready to run applications at any time. Clouds, on the other hand, are highly dynamic. Virtual machines provisioned from the cloud may be used to run applications for only a few hours at a time. In order to make efficient use of such an environment, tools are needed to automatically install, configure, and run distributed services in a repeatable way.

Deploying such applications is not a trivial task. It is usually not sufficient to simply develop a virtual machine (VM) image that runs the appropriate services when the virtual machine starts up, and then just deploy the image on several VMs in the cloud. Often the configuration of distributed services requires information about the nodes in the deployment that is not available until after nodes are provisioned (such as IP addresses, host names, etc.) as well as parameters specified by the user. In addition, nodes often form a complex hierarchy of interdependent services that must be configured in the correct order. Although users can manually configure such complex deployments, doing so is time consuming and error prone, especially for deployments with a large number of nodes. Instead, we advocate an approach where the user is able to specify the layout of their application declaratively, and use a service to automatically provision, configure, and monitor the application deployment. The service should allow for the dynamic configuration of the deployment, so that a variety of services can be deployed based on the needs of the user. It should also be resilient to failures that occur during the provisioning process and allow for the dynamic addition and removal of nodes.

In this paper we describe and evaluate a system called Wrangler [10] that implements this functionality. Wrangler allows users to send a simple XML description of the desired deployment to a web service that manages the provisioning of virtual machines and the installation and configuration of software and services. It is capable of interfacing with many different resource providers in order to deploy applications across clouds, supports plugins that enable users to define custom behaviors for their application, and allows dependencies to be specified between nodes. Complex deployments can be created by composing several plugins that set up services, install and configure application software, download data, and monitor services, on several interdependent nodes.

The remainder of this paper is organized as follows. In the next section we describe the requirements for a cloud deployment service. In Section III we explain the design and operation of Wrangler. In Section IV we present an evaluation of the time required to deploy basic applications on several different cloud systems. Section V presents two real applications that were deployed in the cloud using Wrangler. Sections VI and VII describe related work and conclude the paper.

II. SYSTEM REQUIREMENTS

Distributed applications used in science and engineering research often require resources for short periods in order to complete a complex simulation, analyze a large dataset, or complete an experiment. This makes them ideal candidates for infrastructure clouds, which support on-demand provisioning of resources. Often, these applications require complex environments in which to run. These services include batch schedulers, web servers, file systems, databases, caches, and others. Setting up these environments involves many steps that must be repeated each time the application is deployed. In order to minimize errors and save time, it is important that these steps are automated.

Based on our experience running science applications in the cloud [11,12,33], and our experience using the Context Broker from the Nimbus cloud management system [15], we have developed the following requirements for a deployment service:

• Automatic deployment of distributed applications. Distributed systems often consist of many services deployed across a collection of hosts. A deployment service should enable a user to describe the nodes and services they require, and then automatically provision and configure the application on-demand. This process should be simple and repeatable.

• Complex dependencies. Often, the services in a distributed application depend on one another for configuration values, such as IP addresses, host names, and port numbers. In order to deploy such an application, the nodes and services must be configured in the correct order according to their dependencies, which can be expressed as a directed acyclic graph. Some previous systems for constructing virtual clusters have assumed a fixed architecture consisting of a head node and a collection of worker nodes. This severely limits the type of applications that can be deployed. A virtual cluster provisioning system should support complex deployments.

• Dynamic provisioning. The resource requirements of distributed applications often change over time. For example, an e-commerce application may require more web servers during daylight hours, but fewer web servers at night. Similarly, a science application may require many worker nodes during the initial stages of a computation, but only a few nodes during the later stages. A deployment service should support dynamic provisioning by enabling the user to add and remove nodes from a deployment at runtime. This should be possible as long as the deployment's dependencies remain valid when a node is added or removed. This capability could be used along with elastic provisioning algorithms (e.g. [19]) to easily adapt deployments to the needs of an application at runtime.

• Multiple cloud providers. In the event that a single cloud provider is not able to supply sufficient resources for an application, or reliability concerns demand that an application is deployed across independent data centers, it may become necessary to provision resources from several cloud providers at the same time. This capability is known as federated cloud computing or sky computing [16]. A deployment service should support multiple resource providers with different provisioning interfaces, and should allow a single application to be deployed across multiple clouds.

• Monitoring. Long-running services may encounter problems that require user intervention. In order to detect these issues, it is important to continuously monitor the state of a deployment in order to check for problems. A deployment service should make it easy for users to specify tests that can be used to verify that a node is functioning properly, and should automatically run these tests and notify the user when errors occur.

In addition to these functional requirements, the system should exhibit other characteristics important to distributed systems, such as scalability, reliability, and usability.

III. ARCHITECTURE AND IMPLEMENTATION

We have developed a system called Wrangler to support the requirements outlined above. The components of the system are shown in Figure 1. They include: clients, the coordinator, and agents.

• Clients run on each user's machine and send requests to the coordinator to launch, query, and terminate deployments. Clients have the option of using a command-line tool, a Python API, or XML-RPC to interact with the coordinator.

• The coordinator is a web service that manages application deployments. It accepts requests from clients, provisions nodes from cloud providers, collects information about the state of a deployment, and acts as an information broker to aid application configuration. The coordinator stores information about its deployments in an SQLite database.

• Agents run on each of the provisioned nodes to manage their configuration and monitor their health. The agent is responsible for collecting information about the node (such as its IP addresses and hostnames), reporting the state of the node to the coordinator, configuring the node with the software and services specified by the user, and monitoring the node for failures.

• Plugins are user-defined scripts that implement the behavior of a node. They are invoked by the agent to configure and monitor a node. Each node in a deployment can be configured with multiple plugins.

Figure 1: System architecture

A. Specifying Deployments

Users specify their deployment using a simple XML format. Each XML request document describes a deployment consisting of several nodes, which correspond to virtual machines. Each node has a provider that specifies the cloud resource provider to use for the node, and defines the characteristics of the virtual machine to be provisioned, including the VM image to use and the hardware resource type, as well as the authentication credentials required by the provider. Each node has one or more plugins, which define the behaviors, services and functionality that should be implemented by the node. Plugins can have multiple parameters, which enable the user to configure the plugin, and are passed to the script when it is executed on the node. Nodes may be members of a named group, and each node may depend on zero or more other nodes or groups.

An example deployment is shown in Figure 2.

<deployment>
  <node name="server">
    <provider name="amazon">
      <image>ami-912837</image>
      <instance-type>c1.xlarge</instance-type>
      ...
    </provider>
    <plugin script="nfs_server.sh">
      <param name="EXPORT">/mnt</param>
    </plugin>
  </node>
  <node name="client" count="3" group="clients">
    <provider name="amazon">
      <image>ami-901873</image>
      <instance-type>m1.small</instance-type>
      ...
    </provider>
    <plugin script="nfs_client.sh">
      <param name="SERVER">
        <ref node="server" attribute="local-ipv4"/>
      </param>
      <param name="PATH">/mnt</param>
      <param name="MOUNT">/nfs/data</param>
    </plugin>
    <depends node="server"/>
  </node>
</deployment>

Figure 2: Example request for a 4-node virtual cluster with a shared NFS file system

The example describes a cluster of 4 nodes: 1 NFS server node and 3 NFS client nodes. All nodes are to be provisioned from Amazon EC2, and different images and instance types are specified for the server and the clients. The server is configured with an "nfs_server.sh" plugin, which starts the required NFS services and exports the /mnt directory. The clients are configured with an "nfs_client.sh" plugin, which starts NFS services and mounts the server's /mnt directory as /nfs/data. The "SERVER" parameter of the "nfs_client.sh" plugin contains a <ref> tag. This parameter is replaced with the IP address of the server node at runtime and used by the clients to mount the NFS file system. The clients, which are identical, are specified as a single node with a "count" of three. The clients are part of a "clients" group, and depend on the server node, which ensures that the NFS file system exported by the server will be available for the clients to mount when they are configured.
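The nfs_client.sh plugin referenced in Figure 2 is not shown here. The following is a minimal illustrative sketch of what such a plugin might look like, assuming the parameter and command conventions described in Section III.C (parameters arrive as environment variables, and the agent passes start, stop, or status as the first argument). The script body is an assumption made for illustration, not the plugin used by the authors; in particular, the export parameter that Figure 2 names PATH is renamed EXPORT_PATH here to avoid clobbering the shell's own PATH variable, since the paper does not say how that collision is handled.

    #!/bin/bash -e
    # Illustrative NFS client plugin sketch (not from the paper).
    # Assumed parameters (environment variables): SERVER, EXPORT_PATH, MOUNT.
    # The command (start|stop|status) is passed as the first argument.
    if [ "$1" == "start" ]; then
        # Create the mount point and mount the file system exported by the
        # server node; SERVER is resolved from the <ref> tag at runtime.
        mkdir -p "$MOUNT"
        mount -t nfs "$SERVER:$EXPORT_PATH" "$MOUNT"
    elif [ "$1" == "stop" ]; then
        # Unmount before the node is terminated.
        umount "$MOUNT"
    elif [ "$1" == "status" ]; then
        # Exit non-zero if the file system is no longer mounted.
        grep -qs " $MOUNT nfs" /proc/mounts
    fi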

B. Deployment Process

Here we describe the process that Wrangler goes through to deploy an application, from the initial request to termination.

Request. The client sends a request to the coordinator that includes the XML descriptions of all the nodes to be launched. The request can create a new deployment, or add nodes to an existing deployment.

Provisioning. Upon receiving a request from a client, the coordinator first validates the request to ensure that there are no errors. It checks that the request is valid, that all parameters and dependencies can be resolved, and that no dependency cycles exist. Then it contacts the resource providers specified in the request and provisions the appropriate type and quantity of virtual machines. The coordinator is designed to support many different cloud providers. It currently supports Amazon EC2 [1], Eucalyptus [24], and OpenNebula [25]. Adding additional providers is designed to be relatively simple. The only functionalities that a cloud interface must provide are the ability to launch and terminate VMs, and the ability to pass custom contextualization data to a VM. In the event that network timeouts and other transient errors occur during provisioning, the coordinator automatically retries the request.

Startup and Registration. When the VM boots up, it starts the agent process. This requires the agent software to be pre-installed in the VM image. The advantage of this approach is that it offloads the majority of the configuration and monitoring tasks from the coordinator to the agent, which enables the coordinator to manage a larger set of nodes. The disadvantage is that it requires users to re-bundle images to include the agent software, which is not a simple task for many users and makes it more difficult to use off-the-shelf images. In the future we plan to investigate ways to install the agent at runtime to avoid this issue. When the agent starts, it uses a provider-specific adapter to retrieve the contextualization data passed by the coordinator, and to collect attributes about the node and its environment. The contextualization data includes: the host and port where the coordinator can be contacted, the ID assigned to the node by the coordinator, and security credentials. The attributes collected include the public and private hostnames and IP addresses of the node, as well as any other relevant information available from the metadata service, such as the availability zone. Once the agent has retrieved this information, it is sent to the coordinator as part of a registration message, and the node's status is set to 'registered'.

Configuration. When the coordinator receives a registration message from a node it checks to see if the node has any dependencies. If all the node's dependencies have already been configured, then the coordinator sends a request to the agent to configure the node. If they have not, then the coordinator waits until all dependencies have been configured before proceeding. Upon receiving a message that the node should be configured, the agent downloads and invokes the associated plugin scripts with the user-specified parameters, resolving any <ref> parameters that may be present. If any of the plugins report errors, then the agent aborts the configuration process and reports the failure to the coordinator, and the node's status is set to 'failed'. If all plugins were successfully started, then the agent reports the node's status as 'configured' to the coordinator. After a node has been configured, the coordinator checks to see if there are any nodes that depend on the newly configured node. If there are, then the coordinator attempts to configure them as well. The configuration process is complete when all agents report to the coordinator that they are configured.

Monitoring. After a node has been configured, the agent periodically monitors the node by invoking all the node's plugins with the status command. If a plugin fails with a non-zero exit code, the error messages are sent to the coordinator and the node's status is set to 'failed', at which point the user must intervene to correct the problem.

Termination. When the user is ready to terminate one or more nodes, they send a request to the coordinator. The request can specify a single node, several nodes, or an entire deployment. Upon receiving this request, the coordinator sends messages to the agents on all nodes to be terminated, and the agents send stop commands to all of their plugins. Once the plugins are stopped, the agents report their status to the coordinator, and the coordinator contacts the cloud provider to terminate the node(s).

The system does not assume anything about the network connectivity between nodes, so that an application can be deployed across many clouds. The only requirement is that the coordinator can communicate with the agents and vice versa.
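The provider-specific adapter used in the Startup and Registration step is not shown here. As a rough illustration only, on Amazon EC2 such an adapter could obtain contextualization data and node attributes from the standard instance metadata service; the assumption that the coordinator's contextualization data is delivered as EC2 user data, and the variable names below, are made up for this example and are not taken from Wrangler's implementation.

    #!/bin/bash
    # Illustrative sketch: gathering registration attributes on EC2 via the
    # instance metadata service (not Wrangler's actual adapter code).
    MD=http://169.254.169.254/latest

    # Contextualization data (coordinator host/port, node ID, credentials)
    # is assumed here to be delivered as EC2 user data.
    CONTEXT=$(curl -s "$MD/user-data")

    # Node attributes reported in the registration message; the attribute
    # name "local-ipv4" matches the <ref> example in Figure 2.
    LOCAL_IPV4=$(curl -s "$MD/meta-data/local-ipv4")
    PUBLIC_HOSTNAME=$(curl -s "$MD/meta-data/public-hostname")
    LOCAL_HOSTNAME=$(curl -s "$MD/meta-data/local-hostname")
    AVAILABILITY_ZONE=$(curl -s "$MD/meta-data/placement/availability-zone")

    echo "local-ipv4=$LOCAL_IPV4"
    echo "public-hostname=$PUBLIC_HOSTNAME"
    echo "local-hostname=$LOCAL_HOSTNAME"
    echo "availability-zone=$AVAILABILITY_ZONE"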

C. Plugins

Plugins are user-defined scripts that implement the application-specific behaviors required of a node. Plugins are the modular components of a deployment. Many different types of plugins can be created, such as service plugins that start daemon processes, application plugins that install software used by the application, configuration plugins that apply application-specific settings, data plugins that download and install application data, and monitoring plugins that validate the state of the node. For example, NFS server and NFS client plugins can be combined with plugins for different batch schedulers, such as Condor [18], PBS [26], or Sun Grid Engine [8], to deploy many different types of compute clusters. This enables users to easily define, modify, and reuse custom plugins, and well-designed plugins can be reused for many different applications. We envision that there could be a repository for the most useful plugins.

Plugins are typically shell, Perl, Python, or Ruby scripts, but can be any executable program that conforms to the required interface. They are specified in the XML request document described above, and are transferred from the client (or potentially a repository) to the coordinator when a node is provisioned, and from the coordinator to the agent when a node is configured.

The interface between the agent and the plugin involves two components: parameters and commands. Parameters are the configuration variables that can be used to customize the behavior of the plugin; the agent passes parameters to the plugin as environment variables when the plugin is invoked. Commands are specific actions that must be performed by the plugin to implement the plugin lifecycle; the agent passes commands to the plugin as arguments. There are three commands that tell the plugin what to do: start, stop, and status.

• The start command tells the plugin to perform the behavior requested by the user. It is invoked when the node is being configured. All plugins should implement this command.

• The stop command tells the plugin to stop any running services and clean up. This command is invoked before the node is terminated. Only plugins that must be shut down gracefully need to implement this command.

• The status command tells the plugin to check the state of the node for errors. This command can be used, for example, to verify that a service started by the plugin is running. Only plugins that need to monitor the state of the node or long-running services need to implement this command.

The plugin can advertise node attributes by writing key=value pairs to a file specified by the agent in an environment variable. These attributes are merged with the node's existing attributes and can be queried by other nodes in the virtual cluster using <ref> tags or a command-line tool. For example, an NFS server node can advertise the address and path of an exported file system that NFS client nodes can use to mount the file system. The status command can be used to periodically update the attributes advertised by the node, and to query and respond to attributes updated by other nodes.

If at any time the plugin exits with a non-zero exit code, the output of the plugin is collected and sent to the coordinator to simplify debugging and error diagnosis.

A basic plugin for Condor worker nodes is shown in Figure 3. This plugin generates a configuration file and starts the condor_master process when it receives the start command, kills the condor_master process when it receives the stop command, and checks to make sure that the condor_master process is running when it receives the status command.

#!/bin/bash -e
PIDFILE=/var/run/condor/master.pid
SBIN=/usr/local/condor/sbin

if [ "$1" == "start" ]; then
    if [ "$CONDOR_HOST" == "" ]; then
        echo "CONDOR_HOST not specified"
        exit 1
    fi
    cat > /etc/condor/condor_config.local <<END
CONDOR_HOST = $CONDOR_HOST
END
    $SBIN/condor_master -pidfile $PIDFILE
elif [ "$1" == "stop" ]; then
    kill -QUIT $(cat $PIDFILE)
elif [ "$1" == "status" ]; then
    kill -0 $(cat $PIDFILE)
fi

Figure 3: Example plugin used for Condor workers
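To illustrate the data plugin category and the attribute-advertising mechanism described above, the following sketch shows a plugin that downloads an application dataset when started and advertises its location so that other nodes can reference it with a <ref> tag. The parameter names (DATA_URL, DATA_DIR) and the WRANGLER_ATTRIBUTES variable are hypothetical; the paper only states that the agent names the attribute file in an environment variable.

    #!/bin/bash -e
    # Illustrative data plugin sketch (not from the paper).
    # Assumed parameters (environment variables): DATA_URL, DATA_DIR.
    # Assumed attribute-file variable: WRANGLER_ATTRIBUTES (name is hypothetical).
    if [ "$1" == "start" ]; then
        mkdir -p "$DATA_DIR"
        # Download and unpack the application dataset.
        curl -s -o /tmp/data.tar.gz "$DATA_URL"
        tar -xzf /tmp/data.tar.gz -C "$DATA_DIR"
        # Advertise the dataset location as a node attribute.
        echo "data-dir=$DATA_DIR" >> "$WRANGLER_ATTRIBUTES"
    elif [ "$1" == "stop" ]; then
        # Clean up the dataset before the node is terminated.
        rm -rf "$DATA_DIR"
    elif [ "$1" == "status" ]; then
        # Exit non-zero if the dataset has disappeared.
        [ -d "$DATA_DIR" ]
    fi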

D. Dependencies and Groups

Dependencies ensure that nodes are configured in the correct order so that services and attributes published by one node can be used by another node. When a dependency exists between two nodes, the dependent node will not be configured until the other node has been configured. Dependencies are valid as long as they do not form a cycle that would prevent the application from being deployed.

Groups are used for two purposes. First, a node can depend on several nodes at once by specifying that it depends on a group. This is simpler than specifying dependencies between the node and each member of the group. Nodes that depend on a group are not configured until all of the nodes in the group have been configured. Second, groups that depend on themselves form co-dependent groups. Co-dependent groups enable a limited form of cyclic dependencies and are useful for deploying some peer-to-peer systems, such as parallel file systems and distributed caches. Nodes in a co-dependent group are not configured until all members of the group have registered. This ensures that the basic attributes of the nodes that are collected during registration, such as IP addresses, are available to all group members during configuration, and breaks the deadlock that would otherwise occur with a cyclic dependency. These types of groups are useful for services such as Memcached clusters, where the clients need to know the addresses of each of the Memcached nodes, and for collective services, such as parallel file systems, that require each node implementing the service to be aware of all the others.

E. Security

Wrangler uses SSL for secure communications between all components of the system. Authentication of clients is accomplished using a username and password. Authentication of agents is done using a random key that is generated by the coordinator for each node. This authentication mechanism assumes that the cloud provider's provisioning service provides the capability to securely transmit the agent's key to each VM during provisioning.

IV. EVALUATION

The performance of Wrangler is primarily a function of the time it takes for the underlying cloud management system to start the VMs, and the time required for nodes to register and be configured in the correct order. Wrangler adds to this a relatively small amount of overhead for downloading and installing software. With that in mind, we conducted a few basic experiments to determine the overhead of deploying applications using Wrangler. We conducted experiments on three separate clouds: Amazon EC2, NERSC's Magellan cloud [22], and FutureGrid's Sierra cloud [7]. EC2 uses a proprietary cloud management system, while Magellan and Sierra both use the Eucalyptus cloud management system [24]. We used identical CentOS 5.5 VM images and the m1.large instance type on all three clouds.

A. Deployment with no plugins

The first experiment we performed was provisioning a simple vanilla cluster with no plugins. This experiment measures the time required to provision N nodes from a single provider, and for all nodes to register with the coordinator and be configured.

The results of this experiment are shown in Table I. In most cases we observe that the provisioning time for a virtual cluster is comparable to the time required to provision one VM, which we measured to be 55.4 sec (std. dev. 4.8) on EC2, 104.9 sec (std. dev. 10.2) on Magellan, and 428.7 sec (std. dev. 88.1) on Sierra. For larger clusters we observe that the provisioning time is up to twice the maximum observed for one VM. This is a result of two factors. First, the nodes for each cluster were provisioned in serial, which added 1-2 seconds onto the total provisioning time for each node. In the future we plan to investigate ways to provision VMs in parallel to reduce this overhead. Second, on Magellan and Sierra there were several outlier VMs that took much longer than expected to start, possibly due to the increased load on the provider's network and services caused by the larger number of simultaneous requests. Note that we were not able to collect data for Sierra with 16 nodes because the failure rate on Sierra while running these experiments was about 8%, which virtually guaranteed that at least 1 out of every 16 VMs failed.

Table I: Mean provisioning time for a simple deployment with no plugins

             2 Nodes    4 Nodes    8 Nodes    16 Nodes
  Amazon     55.8 s     55.6 s     69.9 s     112.7 s
  Magellan   101.6 s    102.1 s    131.5 s    173.8 s
  Sierra     371.5 s    433.0 s    508.9 s    FAIL

B. Deployment for workflow applications

In the next experiment we again launched a deployment using Wrangler, but this time we added plugins for the Pegasus workflow management system [6], DAGMan [5], Condor [18], and NFS to create an environment that is similar to what we have used for executing real workflow applications in the cloud [12]. The deployment consists of a master node that manages the workflow and stores data, and N worker nodes that execute workflow tasks, as shown in Figure 5.

Figure 5: Deployment used for workflow applications

The results of this experiment are shown in Table II. By comparing Table I and Table II, we can see that it takes on the order of 1-2 minutes for Wrangler to run all the plugins once the nodes have registered.

Table II: Provisioning time for a deployment used for workflow applications

             2 Nodes    4 Nodes    8 Nodes    16 Nodes
  Amazon     101.2 s    111.1 s    112.5 s    185.0 s
  Magellan   173.1 s    175.9 s    206.6 s    349.3 s
  Sierra     447.0 s    455.7 s    500.5 s    FAIL

V. EXAMPLE APPLICATIONS

In this section we describe our experience using Wrangler to deploy scientific workflow applications. Although these applications are scientific workflows, other applications, such as web applications, peer-to-peer systems, and distributed databases, could be deployed as easily.

A. Data Storage Study

Many workflow applications require shared storage systems in order to communicate data products among nodes in a compute cluster. Recently we conducted a study [12] that evaluated several different storage configurations that can be used to share data for workflows on Amazon EC2. This study required us to deploy workflows using four parallel storage systems (Amazon S3, NFS, GlusterFS, and PVFS) in six different configurations, using three different applications and four cluster sizes, for a total of 72 different combinations. Due to the large number of experiments required, and the complexity of the configurations, it was not possible to deploy the environments manually. Using Wrangler we were able to create automatic, repeatable deployments by composing application-specific plugins in different combinations to complete the study.

The deployments used in the study were similar to the one shown in Figure 4. The deployment consists of three tiers: a master node using a Condor Master plugin, N worker nodes with Condor Worker and file system client plugins, and N file system nodes with a file system peer plugin. The file system nodes form a group so that worker nodes will be configured after the file system is ready. This deployment sets up a Condor pool with a shared GlusterFS file system and installs application binaries on each worker node.

Figure 4: Deployment used in the data storage study

B. Periodograms

Kepler [21] is a NASA satellite that uses high-precision photometry to detect planets outside our solar system. The Kepler mission periodically releases time-series datasets of star brightness called light curves. Analyzing these light curves to find new planets requires the calculation of periodograms, which identify the periodic dimming caused by a planet as it orbits its star. Generating periodograms for the hundreds of thousands of light curves that have been released by the Kepler mission is a computationally intensive job that demands high-throughput distributed computing. In order to manage these computations we developed a workflow using the Pegasus workflow management system [6].

We deployed this application across the Amazon EC2, FutureGrid Sierra, and NERSC Magellan clouds using Wrangler. The deployment configuration is illustrated in Figure 6. In this deployment, a master node running outside the cloud manages the workflow, and worker nodes running in the three cloud sites execute workflow tasks. The deployment used several different plugins to set up and configure the software on the worker nodes, including a Condor Worker plugin to deploy and configure Condor, and a Periodograms plugin to install application binaries. This application successfully demonstrated Wrangler's ability to deploy complex applications across multiple cloud providers.

Figure 6: Deployment used to execute periodograms

VI. RELATED WORK

Our system is similar to the Nimbus Context Broker (NCB) [14] used with the Nimbus cloud computing system [15]. NCB supports roles, which are similar to Wrangler plugins, with the exception that NCB roles must be installed in the VM image and cannot be defined by the user when the application is deployed. In addition, our system is designed to support multiple cloud providers, while NCB works best with Nimbus-based clouds.

Recently, other groups are recognizing the need for deployment services and are developing similar solutions. One example is cloudinit.d [2], which enables users to deploy and monitor interdependent services in the cloud. Cloudinit.d services are similar to Wrangler plugins, but each node in cloudinit.d can have only one service, while Wrangler enables users to compose several modular plugins to define the behavior of a node.

Configuring compute clusters is a well-known systems administration problem. In the past many cluster management systems have been developed to enable system administrators to easily install and maintain high-performance computing clusters [3,9,29,32,34]. Rocks [28] is perhaps the most well known example. These systems typically assume that the cluster is deployed on physical machines that are owned and controlled by the user, and they do not support virtual machines provisioned from cloud providers.

Constructing clusters on top of virtual machines has been explored by several previous research efforts. These include VMPlants [17], StarCluster [31], and others [20,23]. These systems typically assume a fixed architecture that consists of a head node and N worker nodes. They also typically support only a single type of cluster software, such as Condor, SGE, or Globus. In contrast, our approach supports complex application architectures consisting of many interdependent nodes and custom, user-defined plugins.

Configuration management deals with the problem of maintaining a known, consistent state across many hosts in a distributed environment. Many different configuration management and policy engines have been developed for UNIX systems; Cfengine [4], Puppet [13], and Chef [27] are a few well-known examples. Our approach is similar to these systems in that configuration is one of its primary concerns; however, these systems do not provide functionality to deploy virtual machines. Our approach can be seen as complementary to these systems in the sense that one could easily create a Wrangler plugin that installs a configuration management system on the nodes in a deployment, and allow that system to manage node configuration.

This work is related to virtual appliances [30] in that we are interested in deploying application services in the cloud. The focus of our project is on deploying collections of appliances for distributed applications. As such, our research is complementary to that of the virtual appliances community as well.

VII. CONCLUSION

The rapidly-developing field of cloud computing offers new opportunities for distributed applications. The unique features of cloud computing, such as on-demand provisioning, virtualization, and elasticity, as well as the emergence of commercial cloud providers, are changing the way we think about deploying and executing distributed applications. In order to take advantage of these features, new provisioning tools need to be developed to assist users with these tasks.

In this paper we presented the design and implementation of a system used for automatically deploying distributed applications on infrastructure clouds. The system interfaces with several different cloud resource providers to provision virtual machines, coordinates the configuration and initiation of services to support distributed applications, and monitors applications over time.

We have been using Wrangler since May 2010 to provision virtual clusters for scientific workflow applications on Amazon EC2, the Magellan cloud at NERSC, the Sierra and India clouds on the FutureGrid, and the Skynet cloud at ISI. We have used these virtual clusters to run several hundred workflows for applications in astronomy, bioinformatics and earth science.

So far we have found that Wrangler makes deploying distributed applications in the cloud easy, but we have encountered some issues in using it that we plan to address in the future. Currently, Wrangler assumes that users can respond to failures manually. In practice this has been a problem because users often leave virtual clusters running unattended for long periods. In the future we plan to investigate solutions for automatically handling failures by re-provisioning failed nodes, and by implementing mechanisms to fail gracefully or provide degraded service when re-provisioning is not possible. We also plan to develop techniques for re-configuring deployments, and for dynamically scaling deployments in response to application demand.

ACKNOWLEDGEMENTS

This work was sponsored by the National Science Foundation (NSF) under award OCI-0943725. This research makes use of resources supported in part by the NSF under grant 091812 (FutureGrid), and resources of the National Energy Research Scientific Computing Center (Magellan).

REFERENCES

[1] Amazon. Elastic Compute Cloud (EC2). http://aws.amazon.com/ec2.
[2] J. Bresnahan, T. Freeman, D. LaBissoniere, and K. Keahey, "Managing Appliance Launches in Infrastructure Clouds," TeraGrid Conference, 2011.
[3] M. Brim, T. Mattson, and S. Scott, "OSCAR: Open Source Cluster Application Resources," Ottawa Linux Symposium, 2001.
[4] M. Burgess, "A site configuration engine," USENIX Computing Systems, vol. 8, no. 3, 1995.
[5] DAGMan. http://cs.wisc.edu/condor/dagman.
[6] E. Deelman, G. Singh, M.-H. Su, J. Blythe, Y. Gil, C. Kesselman, G. Mehta, K. Vahi, G.B. Berriman, J. Good, A. Laity, J.C. Jacob, and D.S. Katz, "Pegasus: A framework for mapping complex scientific workflows onto distributed systems," Scientific Programming, vol. 13, no. 3, pp. 219-237, 2005.
[7] FutureGrid. http://futuregrid.org/.
[8] W. Gentzsch, "Sun Grid Engine: towards creating a compute power grid," 1st IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid '01), 2001.
[9] Infiniscale. Perceus/Warewulf. http://www.perceus.org/.
[10] G. Juve and E. Deelman, "Wrangler: Virtual Cluster Provisioning for the Cloud," 20th International Symposium on High Performance Distributed Computing (HPDC), 2011.
[11] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B.P. Berman, and P. Maechling, "Scientific Workflow Applications on Amazon EC2," Workshop on Cloud-based Services and Applications, in conjunction with the 5th IEEE International Conference on e-Science (e-Science 2009), 2009.
[12] G. Juve, E. Deelman, K. Vahi, G. Mehta, B. Berriman, B.P. Berman, and P. Maechling, "Data Sharing Options for Scientific Workflows on Amazon EC2," 2010 ACM/IEEE Conference on Supercomputing (SC 10), 2010.
[13] L. Kanies, "Puppet: Next Generation Configuration Management," Login, vol. 31, no. 1, 2006.
[14] K. Keahey and T. Freeman, "Contextualization: Providing One-Click Virtual Clusters," 4th International Conference on e-Science (e-Science 08), 2008.
[15] K. Keahey, R. Figueiredo, J. Fortes, T. Freeman, and M. Tsugawa, "Science clouds: Early experiences in cloud computing for scientific applications," Cloud Computing and Its Applications, 2008.
[16] K. Keahey, M. Tsugawa, A. Matsunaga, and J. Fortes, "Sky Computing," IEEE Internet Computing, vol. 13, no. 5, pp. 43-51, 2009.
[17] I. Krsul, A. Ganguly, J. Zhang, J.A.B. Fortes, and R.J. Figueiredo, "VMPlants: Providing and Managing Virtual Machine Execution Environments for Grid Computing," 2004 ACM/IEEE Conference on Supercomputing (SC 04), 2004.
[18] M. Litzkow, M. Livny, and M. Mutka, "Condor: A Hunter of Idle Workstations," 8th International Conference on Distributed Computing Systems, 1988.
[19] P. Marshall, K. Keahey, and T. Freeman, "Elastic Site: Using Clouds to Elastically Extend Site Resources," 10th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGrid 2010), 2010.
[20] M.A. Murphy, B. Kagey, M. Fenn, and S. Goasguen, "Dynamic Provisioning of Virtual Organization Clusters," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 09), 2009.
[21] NASA. Kepler. http://kepler.nasa.gov/.
[22] NERSC. Magellan. http://magellan.nersc.gov/.
[23] H. Nishimura, N. Maruyama, and S. Matsuoka, "Virtual Clusters on the Fly - Fast, Scalable, and Flexible Installation," 7th IEEE International Symposium on Cluster Computing and the Grid (CCGrid 07), 2007.
[24] D. Nurmi, R. Wolski, C. Grzegorczyk, G. Obertelli, S. Soman, L. Youseff, and D. Zagorodnov, "The Eucalyptus Open-source Cloud-computing System," 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 09), 2009.
[25] OpenNebula. http://www.opennebula.org/.
[26] OpenPBS. http://www.openpbs.org/.
[27] Opscode. Chef. http://www.opscode.com/chef.
[28] P.M. Papadopoulos, M.J. Katz, and G. Bruno, "NPACI Rocks: tools and techniques for easily deploying manageable Linux clusters," Concurrency and Computation: Practice and Experience, vol. 15, no. 7-8, pp. 707-725, 2003.
[29] Penguin Computing. Scyld ClusterWare. http://www.penguincomputing.com/software/scyld_clusterware.
[30] C. Sapuntzakis, D. Brumley, R. Chandra, N. Zeldovich, J. Chow, M.S. Lam, and M. Rosenblum, "Virtual Appliances for Deploying and Maintaining Software," 17th USENIX Conference on System Administration, 2003.
[31] StarCluster. http://web.mit.edu/stardev/cluster/.
[32] P. Uthayopas, S. Paisitbenchapol, T. Angskun, and J. Maneesilp, "System management framework and tools for Beowulf cluster," Fourth International Conference/Exhibition on High Performance Computing in the Asia-Pacific Region, 2000.
[33] J.-S. Vockler, G. Juve, E. Deelman, M. Rynge, and G.B. Berriman, "Experiences Using Cloud Computing for a Scientific Workflow Application," 2nd Workshop on Scientific Cloud Computing (ScienceCloud), 2011.
[34] Z.-H. Zhang, D. Meng, J.-F. Zhan, L. Wang, L.-P. Wu, and W. Huang, "Easy and reliable cluster management: the self-management experience of Fire Phoenix," 20th International Parallel and Distributed Processing Symposium (IPDPS 06), 2006.